Vanta released its third annual State of Trust Report, an in-depth analysis uncovering trends in AI, security, compliance, and trust from a survey of 3,500 IT and business leaders across the US, UK, France, Germany, and Australia.
Today, 69% of UK organisations say the security risks for their company have never been higher – a 15-point increase from 2024, when only 54% said the same. As AI-driven cyber threats proliferate, organisations admit they can’t keep up, with a majority (53%) of business and IT leaders warning that AI cyber threats are advancing faster than their security team’s expertise to deal with them. In the past year, nearly half of all organisations reported an increase in AI-generated phishing (43%), AI-powered malware (44%), and AI-driven identity theft or fraud (43%).
On the other hand, the number of companies leveraging AI agents to protect against AI-driven cyber attacks is increasing sharply, with eight in 10 leaders currently using AI agents or planning to this year. However, AI usage doesn’t match understanding of the technology – particularly when it comes to agents, with nearly two-thirds (61%) saying their use of agentic AI outpaces their understanding of it.
“AI has completely changed the security equation,” said Jeremy Epling, Chief Product Officer, Vanta. “It’s creating new risks at unprecedented speed, but it’s also one of the most powerful tools we have to strengthen defences and limit burnout for overworked security teams. The challenge now is balance – adopting AI in ways that enhance security without losing control or visibility. As evident in the State of Trust data, to really build lasting trust, we need frameworks to help ensure AI is reliable, secure, and verifiable in how it makes decisions.”
Agentic AI adoption is high, but control is low
To combat the surge of new attack vectors, security teams are trusting agentic AI with everything from decision making to security strategy. But a lack of governance threatens to do more harm than good:
- 80% of leaders are currently using or planning to use AI agents to protect against AI-driven cyber attacks
- 58% say they trust agentic AI to override human decision-making in certain scenarios, like suspending a risky browser extension or session when a policy violation is detected
- 71% of teams feel comfortable with agentic AI giving input on security strategy
- But AI usage doesn’t match understanding – with nearly two-thirds (61%) saying their use of agentic AI outpaces their grasp of it
- And fewer than half (46%) have any framework in place to manage agentic AI use
Security theatre is getting in the way of real protection
The security paradox of AI means that as customers demand more proof of security, many teams are spending more time proving security than improving it.
While eight in 10 believe improving security and compliance directly boosts customer trust, leaders say their organisations spend far less than they should on security – dedicating just 10% of IT budgets to security versus an ideal of 16%. Meanwhile, teams spend 12 working weeks per year on compliance-related tasks (up from 10 last year) and nine working weeks per year on vendor security reviews and risk assessments.
As a result, 56% say they spend more time proving security than improving it, and 63% say today’s security frameworks feel like ‘security theatre’.
AI eases cybersecurity team burnout
Amid growing compliance pressure, AI is both a relief valve and a reinvention tool. It’s helping overburdened teams do more with less, automating manual tasks and freeing up time for meaningful security work.
- 78% of security and compliance leaders say AI and automation tools are reducing burnout and improving day-to-day productivity
- 96% believe AI and automation have improved security team effectiveness
- One in two say that risk assessments and incident response are faster and more accurate with AI
VantaCon 2025: How AI is rewriting trust
On 19th November, Vanta will host VantaCon 2025: How AI is Rewriting Trust, bringing together security’s brightest minds for a half-day of keynotes and panels exploring how AI is transforming trust, risk, and compliance.
Speakers include Alex Stamos, CSO at Corridor, Professor at Stanford and former Chief Security Officer at Facebook; Jason Clinton, CISO, Anthropic; Jason Priest, VP, Security / CISO, 1Password; Mandy Matthew, Lead Security Risk Program Manager, Duolingo; and Andrew Becherer, CISO, Sublime Security.