The future of artificial intelligence infrastructure will be defined by liquid cooling, soaring rack densities, and unprecedented collaboration between chipmakers and data-centre operators, senior executives from NVIDIA, Schneider Electric, and Sweden’s EcoDataCentre told an industry forum this week.
Speaking at the Schneider Electric Innovation Summit today, Vladimir Pradanovic, NVIDIA’s Head of Data Centre Readiness, said the company’s next generation of AI platforms will push data-centre power densities far beyond anything seen before.
“We started with air-cooled racks at 25 kilowatts, then 45 with the H100, and today we’re deploying systems that average between 125 and 250 kilowatts,” he said. “Next year, with the Vera Rubin platform, customers will be able to choose between two racks at 190 kilowatts each, or a single rack drawing 385 kilowatts. By 2027, we expect to reach one megawatt per rack.”
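To put those densities in perspective, here is a back-of-envelope sketch using the per-rack figures quoted above; the per-rack floor footprint is an assumed illustrative value, not a figure from the talk.

```python
# Back-of-envelope: racks and white-space area needed per 10 MW of IT
# load, using the per-rack densities quoted by Pradanovic. The footprint
# per rack (including aisle allowance) is an assumed illustrative value.
IT_LOAD_KW = 10_000      # 10 MW IT power budget
FOOTPRINT_M2 = 3.0       # assumed m^2 per rack incl. aisles (hypothetical)

generations = {
    "air-cooled (25 kW)": 25,
    "H100 (45 kW)": 45,
    "GB200 (125-250 kW, midpoint)": 187.5,
    "Vera Rubin single rack (385 kW)": 385,
    "2027 target (1 MW)": 1_000,
}

for name, kw in generations.items():
    racks = IT_LOAD_KW / kw
    print(f"{name:30s} -> {racks:6.1f} racks, ~{racks * FOOTPRINT_M2:7.1f} m^2")
```

The same 10 MW shrinks from roughly 400 air-cooled racks to about ten racks at the projected one-megawatt density.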
The rapid escalation in performance comes with profound engineering and logistical challenges. The liquid-cooled GB200 racks weigh roughly 1.6 tonnes each when empty, rising to more than 2 tonnes when filled with coolant and mounted within their steel frames. Transporting such hardware requires bespoke freight solutions.
No delivery trucks
“These aren’t something you can simply put in a delivery truck,” Pradanovic said. “Each rack has to travel in a pressurised nitrogen environment to avoid microleakage, and we use only certain aircraft — 747s, A380s, or dedicated cargo freighters — to move them safely. Even then, we limit each rack to 70% fluid capacity during transport, topping them up on-site.”
At the destination, the racks are craned directly onto reinforced flooring that can bear more than 2,000 kilograms per square metre. “Many existing data centres were never built for this,” Pradanovic added. “We’re seeing operators dig deeper foundations, rebuild plinths, and redesign corridors to accommodate the weight and liquid-handling systems.”
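A rough calculation shows why the flooring matters. The sketch below spreads a filled rack’s mass over its base, with and without a load-spreading plinth; the footprint and plinth dimensions are illustrative assumptions, not figures from the talk.

```python
# Rough floor-loading check: a filled rack's mass spread over its base,
# optionally via a load-spreading plinth. Footprint and plinth sizes are
# assumed illustrative values; the 2,000 kg mass is from the article.
RACK_MASS_KG = 2_000
FOOTPRINT_M2 = 0.6 * 1.2   # assumed rack base: 600 mm x 1200 mm
PLINTH_M2 = 1.0 * 1.6      # assumed load-spreading plinth

for label, area in [("bare footprint", FOOTPRINT_M2), ("with plinth", PLINTH_M2)]:
    load = RACK_MASS_KG / area
    verdict = "within" if load <= 2_000 else "exceeds"
    print(f"{label:14s}: {load:5.0f} kg/m^2 ({verdict} a 2,000 kg/m^2 floor rating)")
```

On its bare footprint a two-tonne rack imposes nearly 2,800 kg/m², which is why plinths and reinforced slabs come up so often in these retrofits.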
The racks arrive as sealed, nitrogen-filled units which engineers then integrate with the cooling distribution unit (CDU) and dielectric manifolds on site. “It’s like landing a small industrial machine inside a white space,” he said. “The logistics are as complex as the compute.”
NVIDIA’s shift towards full liquid cooling reflects the explosive growth of GPU-based compute required for training and inference workloads. Pradanovic said that roughly 87% of the GB200 and GB300 platforms are now liquid-cooled, a figure expected to rise to 100% within two years as components such as memory and network switches adopt cold-plate technology.
The shift marks a radical departure from the traditional data-centre model built around air-cooled servers consuming 10–15 kilowatts per rack. “Existing hyperscale facilities were never designed for these loads,” said Pradanovic. “Many will need a complete redesign if they are to host large-scale AI factories.”
EcoDataCentre, which operates from Falun in central Sweden, is positioning itself as one of Europe’s few purpose-built operators for high-performance computing. John Wernick, the company’s Chief External Relations and Sustainability Officer, said EcoDataCentre’s early bet on liquid cooling and sustainable construction was now paying off.

“We designed for liquid cooling back in 2019, long before it became mainstream,” he said. “When NVIDIA announced that Blackwell and GB200 would be liquid-cooled, we were probably the only provider that was genuinely thrilled.”
The company builds its data-centre structures from cross-laminated timber rather than concrete or steel, cutting embodied carbon emissions to roughly a third of those of an equivalent conventional build. It also uses hydrotreated vegetable oil in its backup generators and directs more than half of its capital spending to local suppliers.
EcoDataCentre has become a key site for DeepL, the German language-AI company whose translation engine competes with Google Translate. DeepL was an early adopter of NVIDIA’s A100 and H100 GPUs and is now among the first to deploy the new GB200 “SuperPODs”. According to Wernick, the latest system allows DeepL to “translate the entire internet in 18 days”.
The companies also emphasised the growing importance of waste-heat recovery. As rack temperatures rise, the commercial viability of district-heating integration improves. “We’re now able to deliver much higher volumes of energy to local heating networks,” said Wernick. “That has a real impact for residential areas connected to the grid.”
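As a concrete illustration of the district-heating point, the minimal sketch below estimates annual heat delivery from a block of liquid-cooled racks; the rack count, the fraction of power captured by the liquid loop, and the per-household heat demand are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch: annual heat delivered to a district-heating network
# from liquid-cooled racks. Rack count, capture fraction, and household
# demand are illustrative assumptions, not figures from the talk.
RACKS = 50
RACK_KW = 190            # per-rack figure quoted for Vera Rubin
CAPTURE = 0.8            # assumed share of IT power captured by the liquid loop
HOURS_PER_YEAR = 8_760

heat_mwh = RACKS * RACK_KW * CAPTURE * HOURS_PER_YEAR / 1_000
households = heat_mwh / 20   # assumed ~20 MWh/year per Nordic household
print(f"~{heat_mwh:,.0f} MWh/year, roughly {households:,.0f} households")
```

Under those assumptions, fifty racks could supply on the order of a few thousand homes, which is what makes the economics of heat reuse increasingly attractive as densities climb.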
For Schneider Electric, which provides power and cooling systems for many of these facilities, the AI boom has driven a transformation in data-centre design philosophy. “We’re moving from large, lightly loaded halls to smaller, denser AI factories,” said Steve Carlini, the company’s Chief Advocate for AI and Data Centres. “It’s a complete reinvention of the physical layer.”
Power distribution
Power distribution is also evolving. NVIDIA and its partners are developing 800-volt direct-current systems to replace the current 400-volt standard, reducing copper cabling bulk and improving efficiency. “We expect to see 800-volt infrastructure becoming mainstream by 2027 or 2028,” Pradanovic said.
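The copper saving follows from basic electrical scaling: for a fixed power P, current I = P/V, and resistive loss is I²R, so doubling the voltage quarters the conductor cross-section needed for the same loss budget. The sketch below works through this for a one-megawatt rack; the busbar length and loss budget are illustrative assumptions.

```python
# Why 800 V DC shrinks copper: for fixed power P, current I = P / V,
# and the cross-section needed for a fixed resistive-loss budget scales
# with I^2 (loss = I^2 * R, where R = rho * length / area). The run
# length and loss budget are illustrative assumptions.
RHO_CU = 1.68e-8         # copper resistivity, ohm*m
P_W = 1_000_000          # 1 MW rack, per the article's 2027 figure
LENGTH_M = 20            # assumed one-way busbar run
LOSS_BUDGET_W = 10_000   # assumed 1% distribution loss

for v in (400, 800):
    i = P_W / v                          # current at this voltage
    r_max = LOSS_BUDGET_W / i**2         # maximum loop resistance
    area_mm2 = RHO_CU * (2 * LENGTH_M) / r_max * 1e6   # round-trip conductor
    print(f"{v} V DC: {i:,.0f} A, >= {area_mm2:,.0f} mm^2 of copper")
```

Moving from 400 V to 800 V halves the current and cuts the required copper cross-section roughly fourfold for the same losses.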
Despite the complexity, the economic logic is clear. A single day of downtime on a 4,000-GPU cluster can cost around $300,000 in lost compute revenue. “That’s why monitoring, commissioning, and predictive maintenance are so vital,” said Pradanovic. NVIDIA is using large-language-model techniques to predict optimal cooling and pressure parameters within its AI factories, effectively using AI to optimise AI infrastructure.
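For context, the quoted figure works out as follows, using only the article’s own numbers:

```python
# Per-GPU-hour revenue implied by the article's downtime figure.
DOWNTIME_COST_USD = 300_000   # one day of downtime, per the article
GPUS = 4_000
HOURS = 24

rate = DOWNTIME_COST_USD / (GPUS * HOURS)
print(f"~${rate:.2f} per GPU-hour")   # ~$3.13
```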
As Pradanovic summed up, “The journey continues. These AI factories are like Formula One teams – every second counts, and every component must perform flawlessly.”