Threats are internal, not external, new report from Exabeam shows

Based on a survey of 1,010 cybersecurity professionals across key sectors, the research shows that insider threats have overtaken external attacks as the top security concern

Exabeam recently announced the findings of its new multinational report, ‘From Human to Hybrid: How AI and the Analytics Gap Are Fueling Insider Risk’. Based on a survey of 1,010 cybersecurity professionals across key sectors, the research shows that insider threats have overtaken external attacks as the top security concern, with AI accelerating the shift.

According to the findings, 64% of respondents now view insiders, whether malicious or compromised, as a greater risk than external actors. Generative AI (GenAI) is a major driver, making attacks faster, stealthier, and more difficult to detect.

“Insiders aren’t just people anymore,” said Steve Wilson, Chief AI and Product Officer, Exabeam. “They’re AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed. The question isn’t just who has access — it’s whether you can spot when that access is being abused.”

Insider threat growth shows no signs of slowing

Insider activity is intensifying across industries, driven by both malicious intent and accidental compromise. Over the past year, more than half of organisations (53%) have seen a measurable increase in insider incidents, and the majority (54%) expect that growth to continue. Government organisations are bracing for the steepest rise (73%), followed by manufacturing (60%) and healthcare (53%), fuelled by expanding access to sensitive systems and data.

This surge is not uniform across the board; risk trajectories vary sharply by geography and sector. Asia-Pacific and Japan, for instance, lead in projected insider threat growth (69%), reflecting heightened awareness of identity-driven attacks. The Middle East stands apart, with nearly one-third (30%) anticipating a decrease, a signal of either stronger confidence in current defences or a potential underestimation of evolving risks. These contrasts reflect the complexity of the insider threat landscape and the need for defence strategies that align with regional realities.

AI is powering faster, smarter, and stealthier insider attacks

AI has become a force multiplier for insider threats, allowing actors to operate with unprecedented efficiency and subtlety. Two of the top three current insider threat vectors are now AI-related, with AI-enhanced phishing and social engineering emerging as the most concerning tactics (27%). These attacks can adapt in real time, mimic legitimate communications, and exploit trust at a scale and speed human adversaries cannot match.

Unauthorised GenAI use compounds the challenge, creating a dual-risk scenario where the same tools meant to boost productivity can also be repurposed for malicious activity. More than three-quarters of organisations (76%) report some level of unapproved usage, with those in technology (40%), financial services (32%), and government (38%) experiencing the highest rates.

Regional variations are telling: in the Middle East, unauthorised GenAI is the top insider concern (31%), reflecting both rapid AI adoption and the governance gaps that can follow. Globally, the convergence of insider access and AI capabilities is producing threats that evade traditional controls and demand more advanced behavioural detection.

Missing the mark on detection

While 88% of organisations say they have insider threat programmes, most lack the behavioural analytics needed to catch abnormal activity early. Just 44% use user and entity behaviour analytics (UEBA), the foundational capability for insider threat detection. Many continue to rely on identity and access management, security training, DLP, and EDR: tools that provide visibility but not the behavioural context necessary to spot subtle or emerging risks.
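To make the distinction concrete, the core idea behind UEBA is baselining: scoring each user's current activity against their own historical pattern rather than against a fixed rule. The sketch below is a deliberately minimal illustration using synthetic event counts and a simple z-score, not a representation of Exabeam's product or any vendor's actual detection logic.

```python
from statistics import mean, pstdev

# Synthetic per-user baselines: daily counts of some tracked event
# (e.g. files accessed). Real UEBA systems model many signals at once.
baseline = {
    "alice": [12, 10, 11, 13, 12, 11, 10],
    "bob":   [40, 42, 38, 41, 39, 40, 43],
}

def anomaly_score(history, today):
    """Z-score of today's activity against the user's own baseline."""
    mu, sigma = mean(history), pstdev(history)
    return 0.0 if sigma == 0 else (today - mu) / sigma

# Bob's high absolute count is normal *for Bob*; Alice's sudden spike
# is anomalous relative to her own history, which a static threshold
# on raw counts would miss.
today = {"alice": 55, "bob": 41}
for user, count in today.items():
    score = anomaly_score(baseline[user], count)
    flag = "ALERT" if score > 3 else "ok"
    print(f"{user}: z={score:.1f} {flag}")
```

The point of the example is the one the survey data underscores: identity tools can confirm that valid credentials were used, but only a per-user behavioural baseline reveals that the access pattern itself has changed.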

AI adoption is widespread: 97% of organisations use some form of AI in their insider threat tooling, yet governance and operational readiness lag far behind. More than half of executives believe AI tools are fully deployed, but managers and analysts say many are still in pilot or evaluation stages. Intensifying the challenge, security teams face persistent barriers: privacy resistance, fragmented tools, and difficulty interpreting user intent remain major blind spots.

“AI has added a layer of speed and subtlety to insider activity that traditional defences weren’t built to detect,” said Kevin Kirkwood, CISO, Exabeam. “Security teams are deploying AI to detect these evolving threats, but without strong governance or clear oversight, it’s a race they’re struggling to win. This paradigm shift requires a fundamentally new approach to insider threat defence.”

Closing the insider threat gap

As insider threats accelerate, driven by AI, identity misuse, and a lack of behavioural visibility, organisations that succeed will be those that align leadership priorities with operational reality. Progress will come from moving beyond surface-level compliance to approaches that focus on context, accurately distinguish between human and AI-driven activity, and foster collaboration across teams to close visibility gaps.

Bridging this divide requires more than policy changes. It calls for leadership engagement, cross-functional cooperation, and governance models that keep pace with the speed of AI adoption. Success will be defined by the ability to shorten detection and response times, reduce the window of opportunity for insider activity, and adapt strategies as threats evolve.

