AI isn’t just another productivity tool anymore—it’s everywhere. From Slack to Salesforce, AI is now deeply embedded in everyday SaaS apps, making it impossible to draw a hard line between traditional software and AI. Trying to block it outright? That’s a strategy destined to fail.
In fact, we’ve reached a tipping point. Companies like Shopify now require teams to prove AI can’t do a task before they can request new hires. Meanwhile, the average enterprise is already juggling 254 AI-enabled tools. Yet despite this explosive growth, most businesses are struggling to keep up with security and governance.
The Silent Spread of Risky AI Usage
Of the hundreds of AI tools being used across companies, 7% are developed in China—raising serious concerns around data sovereignty. DeepSeek, one of the more prominent names, caused enough alarm in the U.S. that the Pentagon intervened when it found employees using the tool. And DeepSeek is just the beginning.
Other Chinese AI platforms—like Ernie Bot, Manus, Kimi Moonshot, Qwen Chat, and Baidu Chat—are rapidly gaining ground. Backed by tech giants like Baidu and Alibaba, these tools are evolving fast and quietly creeping into Western workspaces. Given China’s regulatory environment, data shared with these tools could be subject to government access—an alarming reality for any organization handling sensitive or proprietary data.
The real risk? These tools don’t need official rollouts to infiltrate your workforce. Tech-savvy employees eager to boost productivity often experiment with the latest AI solutions on their own, frequently without understanding the full consequences.
Employees Love AI—Even If It Puts Data at Risk
For many workers, AI is too valuable to ignore. In fact, a Fishbowl survey found that 68% of employees keep their ChatGPT usage secret from their bosses. Nearly half admitted they’d continue using it even if it were banned.
Between January and March, a study of 176,000 AI prompts revealed that 6.7% potentially leaked company data. Even more troubling? Nearly half (45.4%) of those sensitive inputs were submitted from personal email accounts, which fall entirely outside corporate governance.
Worse still, 21% of that sensitive data ended up on ChatGPT’s free tier—where prompts may be stored and used for training unless users explicitly opt out. For companies relying on internal AI policies or assuming corporate firewalls offer protection, these numbers are a wake-up call. Employees are finding ways around restrictions, often without understanding the data privacy trade-offs.
Why Blocking AI Tools Won’t Work
Blocking public LLMs or GenAI tools might feel like the safe move—but it’s not a sustainable one. Since most SaaS platforms now bake AI directly into their workflows, drawing a line between “AI” and “non-AI” tools no longer makes sense.
More importantly, blocking tools doesn’t stop usage—it just drives it underground. Employees will switch to personal devices, jump off the VPN, or use mobile apps to access their favorite AI tools. The result? Less oversight, more risk, and complete loss of control over how sensitive information is handled.
A Smarter Strategy: Secure, Not Strict
The path forward requires a mindset shift. Rather than saying no to AI, security and IT leaders must lean in, create secure frameworks that guide adoption, and keep visibility into how AI is actually being used. That means implementing enterprise licenses, setting clear usage policies, and monitoring for shadow AI activity.
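What might "monitoring for shadow AI activity" look like in practice? Below is a minimal sketch, assuming a CSV export of web proxy or secure gateway logs with timestamp, user, and domain columns. The domain lists, file name, and column names are illustrative placeholders, not a vetted inventory of AI tools or a specific product's format.

```python
"""
Minimal sketch: flag "shadow AI" activity in web proxy logs.

Assumptions (all hypothetical, adjust to your environment):
- proxy_log.csv has columns: timestamp, user, domain
- APPROVED_AI covers the tools your org has actually licensed
- WATCHLIST_AI lists public GenAI domains you want visibility into
"""

import csv
from collections import defaultdict

# Tools covered by an enterprise license and usage policy (example values).
APPROVED_AI = {
    "chat.openai.com",
    "copilot.microsoft.com",
}

# Public GenAI endpoints to surface for review (example values).
WATCHLIST_AI = {
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "chat.deepseek.com",
    "kimi.moonshot.cn",
}


def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Return {user: {unapproved AI domains they contacted}}."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in WATCHLIST_AI and domain not in APPROVED_AI:
                hits[row["user"]].add(domain)
    return hits


if __name__ == "__main__":
    for user, domains in sorted(find_shadow_ai("proxy_log.csv").items()):
        print(f"{user}: {', '.join(sorted(domains))}")
```

In a real deployment this logic would live in a CASB, SIEM, or SaaS security platform rather than a standalone script, but the principle is the same: compare the AI traffic you can observe against the tools you have actually approved, and treat the gap as the starting point for a conversation, not a punishment.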
If done right, this proactive approach transforms security from the “department of no” into a strategic driver of smarter, safer AI adoption. With the right guardrails in place, teams get the benefits of AI without putting sensitive data at risk, and the business moves faster, not slower.