A recent research report commissioned by Hewlett Packard Enterprise (HPE) reveals that only 32% of IT leaders in the UK and Ireland believe their organisations are fully prepared to harness the benefits of AI, despite 96% having started or already completed setting AI goals. The report highlights critical gaps in strategy, including a lack of alignment between processes and metrics, leading to a fragmented approach that may hinder successful AI delivery.
The report, ‘Architect an AI Advantage’, surveyed nearly 400 IT leaders across the UK and Ireland. It found that while there is a clear commitment to AI with growing investments, businesses are neglecting key areas that could impact their ability to deliver successful AI outcomes. These areas include low levels of data maturity, potential weaknesses in networking and compute provisioning, and vital considerations around ethics and compliance. The report also identified significant disconnects in both strategy and understanding, which could negatively affect future returns on investment (ROI).
“It’s unsurprising that our research reported that 94% of businesses are planning to increase their AI budgets this year,” said Matt Armstrong-Barnes, Chief Technologist for AI, Hewlett Packard Enterprise. “However, what may be surprising is that businesses are investing in AI without first taking a holistic view of the technology and how to implement it. Diving in before considering whether they are set up to benefit from AI and who needs to be involved in its roll-out will lead to misalignment between departments and fragmentation that limits its potential.”
Addressing low data maturity
Strong AI performance relies on high-quality data input, yet the research shows that although organisations recognise this—identifying data management as critical for AI success—their data maturity levels remain low. Only 6% of organisations can execute real-time data transfers to drive innovation and external data monetisation, and just 29% have established data governance models capable of supporting advanced analytics.
More concerning is that fewer than six in ten respondents said their organisation is fully capable of managing the key stages of data preparation for AI models, including accessing (57%), storing (51%), analysing (54%), and processing (52%) data. This gap not only risks delaying AI model development but also increases the likelihood of inaccurate insights and a negative ROI.
Provisioning for the AI lifecycle
A similar disparity was found regarding compute and networking requirements throughout the AI lifecycle. Although 92% of IT leaders believe their network infrastructure can support AI traffic and 83% think their systems have sufficient flexibility in compute capacity for various AI stages, fewer than half said they fully understand the demands of different AI workloads across data acquisition, model training, and monitoring. This raises serious concerns about how accurately they can provision for these needs.
Gartner predicts that “GenAI will play a role in 70% of text- and data-heavy tasks by 2025, up from less than 10% in 2023,” yet many IT leaders lack a comprehensive understanding of what the demands of various AI workloads might entail.
Overlooking cross-business integration, compliance and ethics
Many organisations are failing to integrate key business areas, with over a quarter (28%) of IT leaders describing their AI approach as “fragmented”. For example, more than a third (38%) of organisations have developed separate AI strategies for individual functions, while another 38% have set different goals altogether.
Even more troubling, ethics and compliance are being largely ignored despite increasing scrutiny from consumers and regulatory bodies. The research shows that legal/compliance (15%) and ethics (14%) were considered the least critical factors for AI success by IT leaders. Additionally, one in five organisations (20%) are not involving legal teams in their AI strategy discussions at all.
The risks of overconfidence
As businesses rush to capitalise on AI, failing to establish proper AI ethics and compliance frameworks could expose proprietary data, compromising competitive advantage and brand reputation. Without an AI ethics policy, there is a risk of developing models that fail to meet compliance and diversity standards, potentially leading to negative brand impacts, loss of sales, or costly legal disputes.
The report further emphasises that the quality of AI models is directly linked to the quality of the data they use. Combined with the finding that less than half of IT leaders fully understand the IT infrastructure demands across the AI lifecycle, this raises the risk of developing ineffective models, including the threat of AI hallucinations. Moreover, the significant power demands for running AI models could unnecessarily increase data centre carbon emissions, further reducing the ROI from AI investments and potentially damaging the company’s brand.
“If businesses continue their current approach to AI, it will adversely impact their long-term success,” added Armstrong-Barnes. “They must adopt a comprehensive end-to-end approach across the full AI lifecycle to streamline interoperability and better identify risks and opportunities. Considering AI – especially GenAI – is data, power, time and resource intensive to deploy and maintain, businesses need to take the necessary steps and lay the groundwork for their deployments so they don’t run before they can walk.”