AI-driven attacks have changed the landscape: Q&A with Matthew Martin

Matthew Martin, Founder and CEO of Two Candlesticks, is no stranger to the many ways cyber attackers seek to gain access to businesses and extort them

As the head of a cybersecurity consultancy working across the US, UK, and Africa, Matthew Martin, Founder and CEO of Two Candlesticks, is no stranger to the various ways cyber attackers seek to gain access to businesses and extort them: usually for financial gain, and increasingly for political reasons, as nation-state attacks are on the rise.

AI-driven attacks represent a new area in which attackers are leveraging the technology to perpetrate attacks, and as such, they must be a priority for businesses reviewing the cybersecurity measures they have in place. In a report delivered by Keeper Security in 2024, 51% of respondents named AI-driven attacks among the emerging attack vectors IT leaders were witnessing firsthand.

Matthew Martin, Founder and CEO, Two Candlesticks

For Martin, whose customers are small to medium-sized companies in historically under-served markets, translating what AI-driven attacks mean for these different businesses starts with education.

“There was the Data Protection Africa Summit … last year [it] was in Uganda, and the day before the conference, I did a free masterclass for … about 60 executives around the country that came in,” he explained. “It was walking them through that because … a lot of these concepts are foreign to them, and they don’t really understand how it impacts them.” 

But once he helps someone understand what AI-driven attacks are and what risk they pose, he can then translate that into how the threat affects them, their company, and their particular use case.

Examples of AI-driven attacks

Arguably, AI-driven attacks are a new and emerging threat, propelled by the increasing sophistication of AI tools. The launch of ChatGPT in 2022 has been heralded by some as a landmark moment, one that crystallised for many people what AI technology can do: as a Generative AI-powered chatbot, ChatGPT's capabilities extend to generating text, images, and code.

It also goes to show that the classic threats that might have been used to gain access to a business – like phishing emails enticing someone to click a link for a free iPhone – have evolved with the application of AI.

AI-generated phishing emails could, for instance, look much more realistic and credible.

“It’s going to be written in the tone of the person you think it’s coming from. It’s going to … learn how to speak and how to write based on … things from that person,” said Martin. “You’re not going to be able to tell it [is a phishing email] just by looking at it.” 

Martin teaches his clients how to identify these emails, for example by checking who sent them and the domain they came from; these are easy steps to put into practice that don't require technical skills. He also teaches the practice of callbacks.
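The domain check Martin describes can be partly automated. A minimal sketch of the idea, assuming an illustrative trusted-domain list and a simple lookalike heuristic (these are not Martin's method, just one way to implement the check):

```python
from email.utils import parseaddr
from difflib import SequenceMatcher

# Illustrative list of domains this organisation trusts (an assumption).
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}

def check_sender(from_header: str) -> str:
    """Classify a From: header as 'trusted', 'lookalike', or 'unknown'."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # Flag domains that closely resemble a trusted one (e.g. examp1e.com),
    # a common trick in spoofed phishing emails.
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() > 0.85:
            return "lookalike"
    return "unknown"
```

A "lookalike" result is exactly the case a recipient cannot spot by skimming the message body, which is why checking the actual sending domain matters.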

“In [the] financial services, in cybersecurity, we’ve done this for decades,” he added. “When somebody calls up and asks for … money or emails, we call them back on a known number. They should do the same thing. If they’re getting asked by a friend or somebody to send them $500, then they can … hang up and call them back on a known number.”

Employing these tips and tricks can help day to day, and Martin also tailors the advice to each industry and use case, such as explaining how AI can be used to manipulate security cameras in manufacturing facilities or warehouses.

Martin neatly encapsulated how the cybersecurity landscape has fundamentally changed, adding, “We taught people how to spot fake emails. It was about looking for bad grammar … You can’t do that anymore.” 

Patterns in AI-driven attacks

In terms of trends and commonalities in AI-driven attacks, Martin noted deepfakes, alongside phishing, as a growing area that has been used to perpetrate high-profile attacks.

“If I was going to tell anybody … where to focus, it would be that: deepfakes and email,” he said. “[You don’t need] some kind of crazy technology solution to solve it. It’s fundamental good security hygiene that we’ve done forever.”

From Martin's perspective, working with small to medium-sized companies means he doesn't necessarily see the most cutting-edge attacks, and therefore the questions he asks his customers are different from those of a cybersecurity consultancy serving a governmental institution, for example.

The high-profile cyber attack against M&S, with reported losses of approximately £300 million in revenue, may be harder to translate for smaller companies, whose profits don't come anywhere near the hundreds of millions. But this doesn't mean the attacks perpetrated against them are any less devastating.

“I have a client in Canada … they’re a very small company, [around] 14 employees. They got compromised through their email system and lost … somewhere around 300,000 in a weekend.

“That’s hugely impactful to them,” Martin continued. “Not only that, because they got into their email systems, they lost access to all of their company credit cards that their employees were using.” 

This demonstrates that large companies are not the only ones exposed to "catastrophic cyber attacks"; smaller companies are on attackers' radar as well.

Employing good security hygiene

Martin had a few takeaways on how a company could improve its cybersecurity posture, with the first step being a risk assessment.

“Doing the risk assessment helps you understand where your important stuff is,” he said. “What are my important processes? At that point … when you can understand that, then you can start to say, ‘Okay, how could an attack come in?’”

Companies now need to integrate AI-driven attacks into that posture: understanding how AI can be used to attack them, how they themselves are using AI, and what AI their vendors are using.

“When we do third party risk assessments, which we’ve all done forever, we have to ask that question of what AI we’re using, and where does it come from, and how does that algorithm work, and all of that stuff,” added Martin, noting that if an AI technology that you use is exploited for an attack, you may be affected in some way.

“It [AI] is in everything,” Martin continued. “What’s interesting is I saw a study with McKinsey … 80% of organisations, when asked about their AI … said that there was no material benefit from using it. So every single company is trying to use it, very few are actually using it in a way that’s beneficial to the company.”

This raises questions of when and how AI should be applied, with Martin saying that we are past the “exploratory phase” of AI and need to start looking at how to derive value from its application.

In his closing remarks, Martin said that the “fundamentals” of cybersecurity still ring true: “There are so many companies that try to chase the shiny new security thing, and then they’re not patching, or they’re not doing good access management processes, or they’re not doing good data classification.

“You need to apply them [fundamentals] to AI like you would any other new application … it’s something that’s really hyped up and really powerful, but it’s still the basics of what you need to do.”
