The adoption of artificial intelligence in the workplace is accelerating rapidly, often outpacing organizational security measures, leading to unmonitored and risky employee usage.
New research from AI-powered data security firm Cyberhaven reveals that nearly 40 percent of all employee interactions with AI tools now involve sensitive corporate data. As employees adopt specialized tools like Claude and DeepSeek outside official channels, organizations are splitting into "frontier" adopters and "laggards," with the latter losing control over critical information assets.
Cyberhaven Labs, the company’s internal research team, released these findings based on data lineage tracking, which monitors information movement across endpoints, SaaS applications, and AI tools in real time. The research indicates that many companies prioritize growth and experimentation, with security, governance, and oversight often trailing behind.
Much current AI usage occurs in tools that present elevated risk. Employees routinely input sensitive data into a vast and expanding ecosystem of generative AI tools, coding assistants, and custom-built agents.
According to the report, AI use in many organizations resembles a "Wild West" scenario. Tools proliferate faster than policies, employee usage often exceeds visibility, and sensitive data flows across models, applications, and accounts with limited centralized control.
"It’s increasingly difficult to secure the wide use of tools, but what organizations can and should do is ensure their data security solutions include AI usage," Cyberhaven CEO Nishant Doshi said.
A small group of frontier organizations aggressively deploys hundreds of tools to nearly 70 percent of their workforce, while laggard organizations remain stalled at 2 percent adoption. This disparity creates a vacuum where employees, driven by productivity demands, bypass corporate safeguards to establish their own Shadow AI ecosystems.
Cyberhaven’s research highlights five primary findings:
Organizations with the highest rates of AI adoption are utilizing over 300 generative AI tools within their enterprise environment.
Chinese open-weight models are becoming enterprise favorites, accounting for half of endpoint-based usage among Cyberhaven users.
Generative AI tools consistently present risks. Among the top 100 most used generative AI SaaS applications, 82 percent are classified as "medium," "high," or "critical" risk.
One third of employees access generative AI tools from personal accounts, increasing overall risk and expanding Shadow AI.
Employees are feeding AI tools sensitive data, with over one third (39.7 percent) of all interactions with AI tools involving sensitive information.
"Organizations must understand and trace the full lifecycle of data to properly secure it," Doshi urged. He suggested that frontier organizations are likely implementing an official corporate strategy that encourages employees to integrate AI into their daily workflows. Such encouragement fosters a more permissive internal culture for experimenting with new AI features and technologies, whether officially sanctioned or not.
"Laggard companies are primarily held back by a block-first security posture and fragmented legacy data systems that make official integration too risky or complex," he explained.
An inherent lack of trust also plays a role. These organizations lack complete confidence in their employees to use these tools securely or in alignment with company values.
"This creates a divide where leadership at these organizations views AI as a threat to be managed. Frontier companies view it as a productivity engine that can be securely enabled," Doshi added.
A surprising shift in the AI power balance shows Chinese open-weight models rapidly transitioning from niche products to enterprise staples. Models like DeepSeek are driving a significant increase in endpoint-based AI use.
This surge follows the January 2025 release of DeepSeek R1, which demonstrated that China could match, and in specific tasks like coding potentially exceed, US frontier models, according to Doshi. For many employees, the appeal of superior performance and the ease of bypassing corporate filters outweigh geopolitical caution. These factors contribute to a substantial, unmonitored presence of Chinese AI models within Western networks.
"These models’ comparable capabilities, combined with LMArena rankings that feature open-weight models prominently, have led to 50 percent of endpoint-based usage," he confirmed.
As previously noted, an alarming 39.7 percent of interactions involve sensitive data. At least some of this results from employees knowingly uploading proprietary information such as source code. It also produces unintentional exposure when users fold AI tools into standard workflows, such as CRM and research and development. "AI is still a relatively new technology, and employees do not fully understand what it means to put sensitive data into an AI system. This means your data no longer remains under corporate control and now resides with an AI vendor," Doshi said.
Even more concerning, some of these vendors train their models using user submitted data. For employees, however, Doshi added, it may not be clear what constitutes sensitive data or how AI operates.
"Therefore, a lack of security awareness among employees is a major issue in enterprise AI use. That same lack of awareness is also a reason many organizations are hesitant to encourage AI use, opting instead for a block-first policy," he explained.
Sixty percent of Claude and Perplexity usage occurs via employees’ personal accounts, a sign that Shadow AI often outperforms sanctioned corporate tools in user experience and utility.
Claude and Perplexity are specialized tools: Claude is widely favored for coding, while Perplexity is a purpose-built AI search engine.
"I believe that people specifically use Claude and Perplexity for these purposes, respectively, as they provide a better user experience than sanctioned corporate tools," Doshi added.
Kindly share this story for Africa News Connect:
Contact us at: afncon@gmail.com