AI can help support and scale data centre infrastructure, writes Marie Hattar, SVP at Keysight Technologies
AI’s insatiable thirst for computing resources is transforming infrastructure, and the industry is grappling with how to meet its power, scalability, and efficiency demands. This has spurred an influx of investment to reconfigure data centre architectures to address these and other technology requirements.
The crux of the issue is that creating this intelligence requires immense computational capacity. With model complexity increasing by orders of magnitude each year, data centres need to scale rapidly. To put that in perspective, demand is growing so quickly that by 2027, AI workloads are projected to consume more energy than Argentina does.
One size doesn’t fit all
AI is redefining the architecture of every type of data centre: hyperscale, on-site, colocation, and edge. Most of the attention to date has focused on the hyperscaler arms race, where exponential demand for computational resources is creating AI clusters at sites exceeding 1GW of capacity. McKinsey predicts that by 2030, over 60% of AI workloads in Europe and the US will be hosted on hyperscale infrastructure.
Hyperscale to Edge: the architectural spectrum
Data centres must be able to support AI workloads like training large language models (LLMs), which means overhauling facilities’ design and architecture. Power capacity must increase to 200-300kW per rack to support intensive compute, and enhanced cooling solutions are required at these densities. Specialised hardware such as GPUs and TPUs must be integrated, along with expanded storage systems to manage vast volumes of data.
Disaggregated architectures are being deployed so hardware can be managed and scaled independently, enabling different workloads to utilise resources efficiently. Network architectures require updates to handle AI traffic patterns, or AI clusters may become digital gridlocks—processing powerhouses paralysed by data bottlenecks.
In addition to hyperscale facilities, AI is driving demand for decentralised infrastructure that processes data locally. This requires centres designed for edge workloads: high performance within a smaller physical footprint and with lower energy consumption. By 2030, as more processing shifts to the edge, that market is expected to exceed $160 billion.
This growth is driven by the need to support real-time processing closer to end users for applications like autonomous driving, where faster decision-making is paramount. This approach reduces latency and supports the hyper-connected world fuelled by IoT and 5G technologies.
As AI adoption matures, inference workloads are growing significantly faster than training workloads. Infrastructure needs to account for this shift from training to inference, which reasoning models such as DeepSeek R1 and OpenAI’s o3 depend on. These systems apply a trained model to live data to make predictions or solve tasks efficiently.
Connected devices at the edge will generate much of this data, so facilities need enough scale to support low-latency networks with flexible resource allocation, allowing them to absorb unpredictable spikes in inference demand.
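To make flexible resource allocation more concrete, the sketch below shows one simple autoscaling rule that adds or removes inference replicas based on request queue depth and tail latency. It is a minimal illustration rather than any particular scheduler’s implementation; the thresholds, the Metrics fields, and the replica limits are assumptions chosen for the example.

```python
# Illustrative toy autoscaler for inference capacity at an edge site.
# Thresholds, metric names, and replica limits are assumptions for the
# example, not any specific product's API.
from dataclasses import dataclass

@dataclass
class Metrics:
    queue_depth: int       # requests waiting for an accelerator
    p99_latency_ms: float  # observed tail latency

def desired_replicas(current: int, m: Metrics,
                     min_replicas: int = 1, max_replicas: int = 16) -> int:
    """Scale out quickly on demand spikes, scale in gradually when quiet."""
    if m.queue_depth > 100 or m.p99_latency_ms > 250:
        target = current * 2      # react fast to a spike
    elif m.queue_depth < 10 and m.p99_latency_ms < 50:
        target = current - 1      # release capacity slowly
    else:
        target = current
    return max(min_replicas, min(max_replicas, target))

# Example: a sudden burst of requests doubles capacity from 4 to 8 replicas.
print(desired_replicas(4, Metrics(queue_depth=340, p99_latency_ms=410)))  # -> 8
```

In practice a decision like this would feed an orchestrator such as Kubernetes or a site-level scheduler, drawing on far richer signals than two metrics.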
Scaling for and with AI
Paradoxically, AI is both the problem and the solution. Intelligence is vital to solving these scaling challenges and ensuring efficient operations. AI can modernise data centres in numerous ways, including:
Improving energy efficiency is essential for sustainable operations. AI can be deployed to automatically adjust cooling systems and server workloads in response to spikes in demand. Intelligent energy-saving techniques help minimise waste and operational costs while maintaining performance; Google, for example, used machine learning to cut the energy needed to cool its data centres by 40%.
Predictive maintenance uses machine learning to anticipate problems before they occur, minimising downtime and helping extend the life of infrastructure. Given the scale and costs involved, the ability to proactively schedule repairs and upgrades and to optimise resource utilisation has a material impact; a simplified sketch of this idea appears below.
Digital twins augmented with AI create dynamic models for testing and validating components and systems, helping ensure that complex data centres are robust, resilient, and able to support future demands. The AI analyses historical performance and environmental data to provide insights that optimise operations, and these twins can emulate AI workloads against the network to find and fix potential bottlenecks. Advanced test and simulation tools are vital parts of the technology stack needed to create scalable, efficient, and reliable infrastructure.
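As a rough illustration of the predictive maintenance idea above, the sketch below trains a simple classifier on historical telemetry to flag servers at elevated risk of failure so repairs can be scheduled proactively. The feature names, data, and model choice are assumptions made for the example; a real deployment would draw on an operator’s own telemetry, far more data, and proper validation.

```python
# Toy predictive-maintenance example: flag servers at risk of failure
# from basic telemetry. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Historical telemetry: [avg_temp_C, fan_rpm, corrected_memory_errors_per_day]
X_train = np.array([
    [62, 9000, 0], [65, 9200, 1], [81, 12000, 14],
    [79, 11800, 9], [60, 8800, 0], [84, 12500, 22],
])
# Label: 1 = component failed within the following 30 days
y_train = np.array([0, 0, 1, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score the current fleet and schedule maintenance for high-risk nodes.
fleet = np.array([[63, 9100, 0], [82, 12100, 17]])
risk = model.predict_proba(fleet)[:, 1]
for node_id, p in enumerate(risk):
    if p > 0.5:
        print(f"node {node_id}: failure risk {p:.2f} -> schedule maintenance")
```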
AI will accelerate the move towards fully autonomous, intelligent data centres that handle nearly all operations, from monitoring and maintenance to networking, energy management, and security, with minimal human input.
Future-proofing AI infrastructure
As AI matures, data centres must accommodate increasingly complex workloads. Operators are scrambling to scale their infrastructure sustainably to meet these demands without sacrificing performance or reliability. With much of the AI roadmap still hazy, creating flexible, resilient infrastructure that can adapt easily is vital.
The ability to balance hyperscale muscle with edge agility, orchestrated by AI systems, will distinguish the winners from the losers. Providers that embrace this reality will thrive in the AI revolution, while others will power down.

Marie Hattar is the Senior Vice President and Chief Marketing Officer of Keysight Technologies and is responsible for driving the company’s brand and global marketing results. Marie leads Keysight’s corporate positioning, messaging, and communication to internal and external stakeholders to create a competitive advantage for the company’s growth initiatives.