According to Infosecurity Magazine, citing Netskope’s 2026 Cloud and Threat Report, the use of personal generative AI accounts at work is creating a massive security blind spot. Nearly half (47%) of employees using AI tools in the workplace are doing so through personal accounts on services like ChatGPT or Copilot, leaving IT teams with no visibility into what is being shared. The amount of data being sent to these AI apps has exploded, growing sixfold to an average of 18,000 prompts per month, with the top 1% of organizations sending over 1.4 million. Known data policy violations have doubled in the last year as a result, with roughly 3% of AI users responsible for an average of 223 violations per month. These incidents involve leaking source code, confidential data, and even login credentials. While the problem is severe, there is one sign of improvement: the share of personal-account usage has dropped from 78% to 47% over the past twelve months.
Shadow AI is the new Shadow IT
Here’s the thing: employees just want to get work done faster. If the company-provided tool is clunky or locked down, they’ll use the free, powerful version they have at home. It’s the classic “Shadow IT” problem, but on steroids. With traditional software, you might risk a file being stored in an unapproved cloud drive. With a personal LLM account, you’re actively feeding the company’s crown jewels (its source code, its strategic plans, its customer data) into a third-party model whose data usage policies are murky at best. And once that data is in, good luck getting it out. The report warns that attackers can even use crafted prompts to try to extract this sensitive information later. It’s a data exfiltration channel hiding in plain sight.
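To make that exfiltration risk concrete, here is a minimal sketch of the kind of pre-send check a data loss prevention layer could run on outgoing prompts. This is not Netskope’s product or any real DLP engine; the patterns, the internal domain, and the function names are illustrative assumptions.

```python
import re

# Illustrative patterns only -- a real DLP engine uses far richer detection
# (classifiers, document fingerprinting, exact-match dictionaries).
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "internal_hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),  # hypothetical internal domain
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outgoing prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Why does this fail? password = hunter2, host db01.corp.example.com"
    hits = scan_prompt(prompt)
    if hits:
        print(f"Blocked prompt, matched: {', '.join(hits)}")  # block or redact before it leaves
    else:
        print("Prompt allowed")
```

Even a crude filter like this would flag the credential and the internal hostname in the example prompt before it ever left the network.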
The scale is staggering
Let’s talk about those numbers for a second. 70,000 prompts a month for the top quarter of companies? Over 1.4 million for the top 1%? That’s not just casual use; that’s a fundamental, embedded part of how people work now. The data shows a direct correlation: the more an organization embraces AI, the higher its risk. The top 25% of AI users saw an average of 2,100 data policy incidents per month. Basically, your most productive, AI-savvy employees are probably your biggest security liability right now. And Netskope admits the doubled violation rate is likely an underestimate, because companies can’t count violations they never see.
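Some back-of-the-envelope arithmetic with the figures quoted above shows what that scale means day to day; the 22-working-day month is my assumption, not the report’s.

```python
# Rough scale check using the monthly figures quoted in the report coverage above.
avg_prompts_per_month = 18_000            # average organization
top_1pct_prompts_per_month = 1_400_000    # top 1% of organizations
top_quartile_incidents_per_month = 2_100  # data policy incidents, top 25% of AI adopters

WORKING_DAYS = 22  # assumption: roughly 22 working days in a month

print(f"Average org:   ~{avg_prompts_per_month / WORKING_DAYS:,.0f} prompts per working day")
print(f"Top 1% org:    ~{top_1pct_prompts_per_month / WORKING_DAYS:,.0f} prompts per working day")
print(f"Top quartile:  ~{top_quartile_incidents_per_month / WORKING_DAYS:,.0f} policy incidents per working day")
```

Roughly 95 policy incidents every working day, in the organizations that lean on AI hardest, is not something a security team can triage by hand.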
What can companies actually do?
So, what’s the fix? You can’t just ban it; that never works with productivity tools. The drop from 78% to 47% in personal account usage suggests some companies are getting smarter, probably by pairing sanctioned enterprise AI tools (with proper data governance contracts) with aggressive user education. You have to make the safe way the easy way. This isn’t just a software policy issue, either. In industries built on physical-digital interfaces, like manufacturing, securing the data pipeline is paramount: operational data from the factory floor should never end up in a ChatGPT prompt, which means the controls have to extend to the edge devices and industrial systems that generate that data in the first place.
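At the network layer, “making the safe way the easy way” often comes down to an egress policy: route traffic to the sanctioned enterprise tenant, block known consumer endpoints, and log the rest. The sketch below is a simplification of what a real secure web gateway does, and the enterprise tenant hostname is hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical policy: allow only the company's enterprise AI tenant,
# block well-known consumer endpoints, and send everything else to review.
ALLOWED_HOSTS = {"acme.openai.azure.com"}  # hypothetical enterprise tenant
BLOCKED_HOSTS = {"chatgpt.com", "chat.openai.com", "copilot.microsoft.com"}

def egress_decision(url: str) -> str:
    host = (urlparse(url).hostname or "").lower()
    if host in ALLOWED_HOSTS:
        return "allow"
    if host in BLOCKED_HOSTS or any(host.endswith("." + blocked) for blocked in BLOCKED_HOSTS):
        return "block"
    return "review"  # unknown AI endpoints get logged and reviewed, not silently allowed

if __name__ == "__main__":
    for url in ("https://acme.openai.azure.com/v1/chat", "https://chatgpt.com/"):
        print(url, "->", egress_decision(url))
```

The mechanics matter less than the default: anything that isn’t the sanctioned tool should end up visible in a log, not invisible in someone’s personal account.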
Visibility is the first step
The core challenge is visibility. You can’t control what you can’t see. The report’s conclusion is blunt: without stronger controls, accidental leaks and compliance failures will just keep rising. It’s a race between the convenience of AI and the imperative of data security. The good news is that awareness is growing, and the drop in personal account use proves progress is possible. But with prompt volume skyrocketing, the window to get this under control is closing fast. Is your organization tracking what’s being sent to its AI models? Probably not well enough.
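For what even minimal tracking could look like, here is a sketch of an audit wrapper around whatever client the sanctioned tool exposes. The `send_to_model` callable is a stand-in, not a real API, and logging a hash rather than the raw prompt is a deliberate choice so the audit trail doesn’t become yet another copy of the sensitive data.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_prompt_audit")

def logged_prompt(send_to_model, user: str, prompt: str):
    """Record who sent what (by hash and size) before forwarding a prompt to the model."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    audit_log.info(json.dumps(record))
    return send_to_model(prompt)

if __name__ == "__main__":
    fake_model = lambda p: f"(model reply to {len(p)} characters)"  # stand-in for a real client call
    print(logged_prompt(fake_model, "alice", "Summarize Q3 revenue by region"))
```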
