[SPONSORED GUEST ARTICLE] The explosive growth of artificial intelligence (AI) is reshaping what’s possible across industries, but it’s also exposing fundamental weaknesses in data center connectivity solutions. AI clusters today are bottlenecked by the bandwidth, reach, and latency of copper interconnects, which are limiting the efficiency and profitability of token generation in AI factories.
As AI architectures shift from training to cost- and throughput-efficient inference, addressing these constraints requires moving beyond building "bigger chips" to a complete reimagining of AI infrastructure. While the battle for AI supremacy may seem to be about models or algorithms, it will ultimately be won at the infrastructure layer with co-packaged optics (CPO).
The Scaling Challenge in AI
AI models will soon see at least 100x increases in compute and memory requirements. As clusters grow to thousands, and eventually millions, of XPUs (CPUs, GPUs, accelerators), copper-based electrical interconnects face bandwidth, reach, and power constraints that limit data movement and undermine AI system scalability and efficiency.
As a result, AI architectures are approaching an inflection point that demands a new generation of scale-up designs. Enabling thousands of XPUs to operate as one massive chip for trillion-parameter AI models requires solutions built specifically for efficiency, performance, and scale.
Ayar Labs has laid out a three-stage roadmap for enabling these next-gen AI compute architectures:
- Scale-Out: Building on proven optical Ethernet fabrics already used for inter-rack scale-out and scale-across connectivity
- Scale-Up: Migrating to CPO-based scale-up architectures across multiple racks for XPU-to-XPU communication
- Extended Memory: Building extended memory systems that leverage CXL and high-bandwidth, low-latency optical links to maximize compute efficiency across racks and data centers
This progression sets the stage for the economic and performance milestones that analysts expect will drive widespread CPO adoption by 2028.
The Promise of Co-Packaged Optics
CPO has already moved from the lab into the spotlight for AI infrastructure. By integrating optical connectivity directly with compute, CPO unlocks the bandwidth density, power efficiency, and latency performance that copper and traditional pluggable solutions cannot match. This enables close and efficient communication between compute units, paving the way for multi-rack-scale, chip-like performance for ever-larger AI models.
Solutions like Ayar Labs’ TeraPHY™ optical engines provide this multi-rack-scale connectivity with much lower power consumption and greater reliability. CPO isn’t just an incremental step up; it is a generational leap, enabling the deployment of thousands of XPUs within a single scale-up domain, inside GPU clusters numbering in the millions.
The Path to Extended Memory
The future of AI infrastructure is not just about scaling compute but scaling memory as well. The limitations of high bandwidth memory (HBM) next to GPUs are creating bottlenecks as the memory needs of multi-trillion-parameter models increase. The industry is setting its sights on creating massive, extended memory banks located away from the GPU, connected using CXL and high-bandwidth, low-latency optical links.
These integrated, ultra-high-bandwidth systems are the foundation for tomorrow’s AI hardware and models. Only CPO-enabled architectures can deliver the throughput and cost and power efficiency required for these workloads, ushering in the next era of AI advancement and redefining what’s possible in the race for AI supremacy.
To learn more, join Ayar Labs, Astera Labs, and Alchip Technologies at 9:00–10:00 a.m. Pacific time on Wednesday, November 5, 2025, for our webinar “Next-Gen AI Architecture Through Co-Packaged Optics.” Learn more about:
- Approaches to architecting XPU clusters that behave like one giant chip across racks
- Engineering requirements for 100Tb/s+ XPU-to-XPU connectivity
- Power and latency requirements for sustainable 100MW+ AI factories
- Economic tipping points driving CPO adoption in 2027-2028
Moderator:
Timothy Prickett Morgan, Co-Editor, The Next Platform
Speakers:
- Erez Shaizaf, CTO, Alchip Technologies
- Adit Narasimha, VP/GM of Emerging Technologies, Astera Labs
- Vladimir Stojanovic, CTO and Co-Founder, Ayar Labs