Microsoft has collaborated with Ciena to put together a blueprint for next-generation optical networks designed to keep AI traffic flowing, even when systems fail.
In a whitepaper detailing the partnership, the two companies describe how their tiered, zero-trust architecture, built on proven Ciena platforms, promises “survivability by design,” ensuring networks remain operational during both planned maintenance and unexpected outages.
The move comes as hyperscalers increasingly control network design and enterprises demand always-on connectivity to power the AI boom.
The AI optics surge has largely focused on hyperscaler data centres, but its effects are now reaching communications service providers (CSPs) that supply dark fibre, wavelength services, routed capacity, and dedicated private networks.
As AI workloads shift from training to inferencing, connectivity needs are spreading across metro and long-haul networks, touching enterprises, content-neutral providers, and other ecosystem players.
Market research firm Heavy Reading, in collaboration with Ciena, surveyed CSPs worldwide to quantify the impact of AI on network traffic. The results show that AI is expected to make up a significant portion of network demand over the next three years.

In metro networks, around half of respondents anticipate AI will account for more than 30% of traffic, with nearly one-fifth expecting it to exceed 50%. Long-haul networks show an even larger impact: almost one-third of CSPs expect AI to constitute more than half of total traffic. North American operators are particularly bullish, with nearly two-thirds predicting AI will drive more than 30% of network usage, compared with just over one-third of peers in the rest of the world.
CSPs view high-bandwidth wavelength services as the backbone of AI connectivity. Links of 100G and above are expected to dominate, while dark fibre and dedicated private builds, though smaller in scale, remain important for rapid deployment or regulatory-restricted regions. Surveyed operators highlighted low latency, high bandwidth, resiliency, and service-level monitoring as critical features for enterprise AI customers.
Microsoft’s architecture addresses these priorities. It isolates human errors, allows flexible maintenance windows, and, according to the whitepaper, delivers zero downtime during site migrations. It also supports a broad range of bandwidths, from 10G to 400G, across ROADM and channel multiplexer systems, giving CSPs the agility needed for the AI era.
The data shows CSPs are preparing to play a broader role in AI connectivity beyond hyperscaler data centres, particularly for enterprise clients. As AI applications proliferate, networks must combine speed, resilience, and flexibility, a challenge Microsoft’s new blueprint aims to meet head-on.