UK Prime Minister (PM) Keir Starmer recently announced the nation’s plan to “turbocharge” AI, taking forward all 50 recommendations set out by Matt Clifford in his AI Opportunities Action Plan. IoT Insider spoke to experts about the new plan to gauge the industry’s thoughts, and their responses ranged from outright enthusiasm to more cautious warnings about AI being manipulated to perpetrate attacks.
How the UK plans to use AI
Part of the plan focuses on AI’s ability to resolve challenges for the NHS, the UK’s health service, which has experienced significant pressures in recent years, including long waiting lists, a growing population and a shortage of staff. The hope is that AI can help diagnose breast cancer more quickly and speed up the discharge of patients.
Jack Kerr, Director at Appodome, welcomed the use of AI as a tool to improve healthcare but warned against it being weaponised by threat actors.
“The security risks associated with AI should be given the same attention as the UK AI Plans,” Kerr stated. “Cybercriminals are increasingly targeting critical systems such as hospitals, employing sophisticated tactics that include distributing links to fraudulent websites or fake mobile apps via phishing or smishing.”
The news follows a slew of cyberattacks on NHS hospitals last year, which resulted in cancelled appointments. This spelled bad news for the NHS, which has been facing a considerable backlog and has been looking for opportunities to cut waiting times.
“The government needs to strengthen critical national infrastructure (CNI) to ensure patients and their privacy are protected,” Kerr warned.
Other plans centre on supporting teachers with marking to speed up admin, and on using AI to detect potholes in roads.
“Artificial Intelligence will drive incredible change in our country. From teachers personalising lessons, to supporting small businesses with their record-keeping, to speeding up planning applications, it has the potential to transform the lives of working people,” said PM Keir Starmer in his address. “But the AI industry needs a government that is on their side, one that won’t sit back and let opportunities slip through its fingers. And in a world of fierce competition, we cannot stand by. We must move fast and take action to win the global race.”
Key changes include new AI Growth Zones, increased public compute capacity, the appointment of Matt Clifford as advisor to the PM on AI opportunities, a new National Data Library, and a dedicated AI Energy Council chaired by the Science and Energy Secretaries.
AI as a force for good – or for evil?
In spite of the UK government’s enthusiasm about AI, others were more cautious. Ulf Persson, CEO of ABBYY, said: “To maximise effect and to reduce disruption, a focus on developing purpose-built AI systems for specific tasks, rather than relying heavily on generalised AI tools, would be welcome.
“It is important not to forget the implications for the workforce as AI takes over routine tasks. The transition will require a significant investment in retraining and upskilling the workforce, a shift that will be challenging but achievable if the investment goes to the right places.”
Matt Harris, SVP and Managing Director UKIMEA at HPE, said he was “delighted” to see the UK government taking steps towards using AI as a means to drive economic growth and innovation.
“We at HPE understand the importance of dedicated AI compute capabilities and applaud the government’s announcement to create a new state-of-the-art supercomputing facility of our national AI Research Resource,” he said. “Already, AI is an important part of the compute landscape and will only grow in significance in the next few years.”
Others warned that security had to remain front of mind at all times, echoing Kerr’s acknowledgement that AI can just as easily be weaponised as it can be a force for good.
“It is vital that security is at the heart of these developments to ensure that AI systems that are being developed and deployed aren’t posing dangerous security risks,” said Andy Ward, SVP International, Absolute Security. “While the intention of becoming a global AI leader is encouraging, it requires the government, NCSC and industry to ensure that AI rollouts consider the security risks posed and put in place safeguards to provide additional business protections.”
The government made no specific mention of security in the press release accompanying the announcement, but a statement shared by Mike Beck, Global Chief Information Security Officer at Darktrace, sought to reassure companies and consumers “that AI innovation is safe and secure”.
“The upcoming Cyber Security and Resilience Bill offers the opportunity to better safeguard data and AI infrastructure,” he continued, “and it will be important to ensure a more digitised and AI-enabled public sector is secure and trusted.”
There have already been warnings about the risk AI poses if it is used to carry out attacks rather than defend against them. Some have said that the technology is not yet at a point where it can meaningfully support threat actors, but that attackers who are savvy with AI will arguably replace those who aren’t.
The UK plans represent a recognition of the fundamental role AI will play in our future, and the question of how widely it can be applied continues to be explored.