Four in five believe DeepSeek needs regulating

Four in five UK CISOs believe DeepSeek, a China-based AI chatbot, must be urgently regulated by the UK government before it sparks a cyber crisis

Four in five UK CISOs believe DeepSeek, a China-based AI chatbot, must be urgently regulated by the UK government before it sparks a full-scale national cyber crisis. This is according to Absolute Security’s Risk Index report.

In response to growing risks, over a third (34%) of organisations have implemented full bans on AI tools due to cybersecurity concerns, while 30% say they have already pulled the plug on existing AI deployments.

These findings are from a recent survey commissioned by Absolute Security, which surveyed 250 UK CISOs at enterprise organisations to understand how businesses are coping with increasing cyber challenges in an AI-powered world.

DeepSeek has raised significant cybersecurity concerns due to its potential to expose sensitive data and be misused by cyber criminals, causing organisations and governments to reconsider their cybersecurity strategies.

Organisations are already struggling to cope with the increasing complexity of cyber threats. The added layer of AI-powered threats is prompting a re-evaluation of cyber defences.

Three out of five (60%) UK CISOs now predict a rise in cyber attacks as a result of DeepSeek, while the same proportion say the technology is complicating privacy and governance frameworks.

42% of CISOs now see AI as more of a threat than an aid to cybersecurity.

The readiness gap, as evidenced by the survey, is just as concerning, with nearly half (46%) of security leaders admitting their teams are not prepared to handle AI-driven threats. The rapid development of DeepSeek is outpacing their defences, creating a growing risk that many believe can only be managed through government regulation.

“Our research highlights the significant risks posed by emerging AI tools like DeepSeek, which are rapidly reshaping the cyber threat landscape,” said Andy Ward, SVP International, Absolute Security. “As concerns grow over their potential to accelerate attacks and compromise sensitive data, organisations must act now to strengthen their cyber resilience and adapt security frameworks to keep pace with these AI-driven threats. That’s why four in five UK CISOs are urgently calling for government regulation. They’ve witnessed how quickly this technology is advancing and how easily it can outpace existing cybersecurity defences.

“These are not hypothetical risks. The fact that organisations are already banning AI tools outright and rethinking their security strategies in response to the risks posed by LLMs like DeepSeek demonstrates the urgency of the situation. Without a national regulatory framework—one that sets clear guidelines for how these tools are deployed, governed, and monitored—we risk widespread disruption across every sector of the UK economy. The time for debate is over. We need immediate action, policy, and oversight to ensure AI remains a force for progress, not a catalyst for crisis.”

Despite the risks, investment in AI talent is accelerating: 84% of organisations are prioritising the hiring of AI specialists in 2025, and 80% have committed to AI training at the C-suite level.

