Artificial intelligence is reshaping workplaces at an unprecedented pace. But along with its productivity benefits comes a new challenge for companies: shadow AI.
Shadow AI refers to the unauthorised use of AI applications by employees—a more specific version of shadow IT, where staff use software or services that IT departments haven’t approved. According to a survey by Software AG, half of all knowledge workers—those who primarily work at a desk or computer—use personal AI tools.
Some do so because IT doesn’t provide alternatives; others simply want the freedom to choose their own.
“AI has made privacy suddenly much more urgent,” says Yoav Crombie, CEO and co-founder of AGAT Software, a company that has shifted its focus toward AI after 30 years in security. “Before, if you sent an email to the wrong person, the risk was limited. Now, one slip with AI can expose sensitive information to multiple people or even become part of another AI model.”
Modern AI tools are trained on huge amounts of information, some of which includes data companies would rather not share.
According to an analysis of 1 million GenAI prompts and 20,000 uploaded files conducted in 2025 by cyber security specialist Harmonic Security, nearly 22% of uploaded files and 4.73% of prompts contained sensitive content. This means sensitive corporate information could be absorbed into AI training data and potentially surface in outputs for other users.
“Firms are rightly concerned,” Crombie says, speaking to IoT Insider at the IoT Tech Expo in London in February. “Their data ends up in services they don’t control, may not even be aware of, and that could be vulnerable to breaches.”
AGAT provides solutions aimed at giving companies control while allowing them to leverage AI.
“We offer two approaches,” Crombie explains. “One is a firewall and risk engine system that monitors AI usage in real time. It intercepts employee traffic, deciding what data can be shared and what cannot. The other is an on-premises AI platform for companies that need maximum security.”
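The first approach is essentially a content-aware gatekeeper sitting between the employee and the AI service. As a rough illustration (not AGAT's actual product, whose detection logic is proprietary), a minimal intercept might scan outbound prompts for a few hypothetical sensitive-data patterns before deciding whether to forward them:

```python
import re

# Illustrative patterns only; a real deployment would use far richer
# detection (DLP classifiers, entity recognition, file scanning).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return ("allow" or "block") plus the matched categories for a prompt."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return ("block" if hits else "allow", hits)
```

A production system would of course sit at the network layer and handle file uploads and responses too; the sketch only shows the basic allow/block decision point.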
The on-premises solution is designed for sensitive environments, such as manufacturing, finance, or IoT-heavy sectors. “It’s a set of AI services you deploy internally, with no internet connection at all,” Crombie says. “It includes chat, search, anomaly detection—you name it. We wrap open-source models from Google, Meta, or OpenAI inside our platform, so companies can analyse data safely without exposure.”
Crombie stresses that it’s not just about the data itself, but how employees are using AI. “It’s one thing to ask AI to improve English in a document,” he says. “It’s another to ask for legal advice or financial insights. Our risk engine evaluates the sensitivity of the content, the type of AI model, and even the employee’s role. It’s about controlling risk, not just blocking data.”
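The idea of weighing several signals at once, rather than a single yes/no data check, can be sketched as a simple scoring function. The categories, weights, and threshold below are all hypothetical, invented for illustration; they stand in for whatever AGAT's risk engine actually computes:

```python
# Hypothetical risk weights for each signal the article mentions:
# what the employee is asking, which kind of model, and their role.
CONTENT_RISK = {"grammar_edit": 1, "summarise": 2, "legal_advice": 8, "financial_insight": 8}
MODEL_RISK = {"on_premises": 0, "enterprise_saas": 2, "public_consumer": 5}
ROLE_RISK = {"intern": 1, "engineer": 2, "finance": 4, "executive": 5}

def risk_score(content: str, model: str, role: str) -> int:
    # Unknown categories default to a cautious mid-range weight.
    return CONTENT_RISK.get(content, 5) + MODEL_RISK.get(model, 5) + ROLE_RISK.get(role, 3)

def decide(content: str, model: str, role: str, threshold: int = 10) -> str:
    """Allow low-risk requests, warn on borderline ones, block the rest."""
    score = risk_score(content, model, role)
    if score < threshold // 2:
        return "allow"
    return "warn" if score < threshold else "block"
```

So a grammar fix routed to an on-premises model sails through, while an executive pasting financials into a public consumer chatbot is blocked, which matches the "control risk, not just block data" framing.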
Even outside highly regulated industries, shadow AI is growing. “I have clients who aren’t governments or banks,” Crombie adds. “One is a cheese manufacturer—suddenly employees were using AI for everything, and everyone got concerned that company data was going all over the world. The risk of privacy exploded because AI is so accessible, and what you share can be amplified instantly.”
Employees, particularly younger ones, often turn to unauthorised AI because it’s convenient.
Crombie acknowledges this reality: “You can’t realistically stop employees from using AI. The goal isn’t to forbid it—it’s to give companies a way to use AI safely. Shadow AI is going to exist, so enterprises need platforms that control risk while allowing productivity.”
And as new, freely available AI models, such as DeepSeek, continue to come online, Crombie says shadow AI is likely to spread further. “The landscape changes constantly,” he notes. “That’s why I tell companies not to lock themselves into long-term licences. Employees will want to experiment, and the tools they use today might be obsolete in three months.”
For Crombie and AGAT, the solution is clear: give companies visibility, control, and secure platforms that let them harness AI without compromising sensitive data. “AI is too powerful to ignore, but too risky to leave uncontrolled,” he says. “Shadow AI is here to stay, and firms need to be prepared.”
