Tear Down These Walls: How CXL Could Reinvent the Data Center


By Raj Hazra, Micron

The explosion of data is intensifying demands on computation and data center infrastructure. We’re generating and analyzing more data than ever before, driven by the growing pervasiveness of AI workloads and app-heavy devices, a trend that will only accelerate with the rollout of 5G. While this transformation can deliver new insights and competitive advantage, the architectural foundation that has driven data center computing for the past two decades is struggling to keep pace with these sophisticated workload demands.

To meet the needs of these data-rich workloads, designing a platform architecture that is flexible and scalable is key. Future data centers need heterogeneous compute, a re-imagined memory and storage hierarchy, and an open, agnostic interconnect to tie it all together and enable composable systems that can evolve with workloads.

Moving to a World of Heterogeneous Computing

Heterogeneous computing has been around for decades. GPUs have worked in tandem with CPUs since the early 2000s to give a lift to graphics-intensive workloads like gaming. But the expansion of high-performance computing applications and the rise of AI drove requirements for more computational resources. To power these new data-centric workloads more adeptly, enterprises are combining various types of CPUs, GPUs, AI accelerators, and FPGAs (collectively, XPUs), which work together to do heavy lifting tailored to each application’s specialized needs.

Leveraging a range of XPUs in a heterogeneous environment opens the door to solving complex problems more efficiently and easily. The challenge is connecting these disparate compute and memory resources with an interface that can keep pace. Today, CPUs rely on a variety of interfaces for inter-processor communication, GPUs are often connected via PCI Express, memory is connected via DDR channels, and the list goes on. The move to heterogeneous computing will require shifting some of these interconnects to a more performant, industry-standard interface that enables new capabilities like memory tiers, pooled memory, and even the convergence of memory and storage. And to unshackle architectural innovation and choice, we need an open standard with broad industry acceptance.
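To make this concrete, consider how such attached memory is likely to look to software. On Linux, memory expanders attached over a link like CXL are typically surfaced as CPU-less, memory-only NUMA nodes, so existing tools can discover the new capacity. The C sketch below is illustrative only; it assumes a recent Linux kernel and the standard sysfs layout, and simply lists NUMA nodes while flagging the ones that have no CPUs.

    /* Illustrative sketch, not a product example: walk sysfs and flag
     * memory-only NUMA nodes -- the form in which CXL-attached memory
     * expanders are commonly exposed on recent Linux kernels. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        DIR *d = opendir("/sys/devices/system/node");
        if (!d) { perror("opendir"); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            int node;
            if (sscanf(e->d_name, "node%d", &node) != 1)
                continue;

            char path[128];
            snprintf(path, sizeof(path),
                     "/sys/devices/system/node/node%d/cpulist", node);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;

            char cpus[256] = "";
            if (fgets(cpus, sizeof(cpus), f) == NULL)
                cpus[0] = '\0';
            fclose(f);
            cpus[strcspn(cpus, "\n")] = '\0';

            /* An empty cpulist means no CPUs: a memory-only node. */
            printf("node %d: %s\n", node,
                   cpus[0] ? cpus : "(no CPUs -- memory-only node)");
        }
        closedir(d);
        return 0;
    }

On a server with a CXL memory expander, the memory-only node such a scan reports is the natural target for the tiering and pooling capabilities described below.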

CXL Unlocks Platform Architecture Freedom

Enter Compute Express Link (CXL). CXL is an open interface that standardizes a high-performance interconnect for data-centric platforms. It connects CPUs to XPUs, storage, memory, and networking, giving platform architects greater degrees of freedom to build more optimized infrastructures.


This platform architecture freedom is why we think CXL is so significant – not just because of its capability and performance, but because it can connect accelerators and several levels of memory and storage to the processor. It’s a necessary step toward more composable systems with richer memory and storage hierarchies, able to address diverse workloads and enterprise needs that require a balance of performance and capacity, like:

  • Giving a memory-intensive application more capacity with tiers of memory to improve overall performance – such as a mix of lower-latency direct-attached memory and higher-latency, large-capacity memory (a minimal allocation sketch follows this list)
  • Improving VM density, allowing a cloud provider to host more VMs per server thanks to greater memory capacity, both directly attached and attached to the processor over CXL
  • Hosting large databases with a caching layer of storage-class memory, giving an in-memory database a much larger memory footprint and improving its performance
  • Making enterprise-class infrastructure more efficient through CXL-attached memory and storage
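As a minimal illustration of the first scenario in this list, the sketch below uses Linux’s libnuma to keep a small, latency-sensitive working set on direct-attached DRAM while placing a large, colder buffer on a memory-only node of the kind a CXL expander would present. The node numbers, buffer sizes, and build command are assumptions for illustration, not a prescribed implementation.

    /* A minimal tiering sketch, assuming node 0 is CPU-local DRAM and
     * node 2 is a CXL-attached, memory-only expander (discover the real
     * topology first, e.g., via the sysfs scan shown earlier).
     * Build with: gcc tier.c -lnuma */
    #include <numa.h>
    #include <stdio.h>

    #define DRAM_NODE 0   /* assumed: direct-attached, lower-latency DRAM */
    #define CXL_NODE  2   /* assumed: CXL-attached, higher-latency capacity */

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return 1;
        }

        size_t hot_bytes  = 1UL  << 30;  /* 1 GiB hot working set */
        size_t cold_bytes = 16UL << 30;  /* 16 GiB capacity tier  */

        /* Hot data stays on direct-attached DRAM for lowest latency. */
        void *hot = numa_alloc_onnode(hot_bytes, DRAM_NODE);

        /* Cold data tolerates higher latency in exchange for capacity. */
        void *cold = numa_alloc_onnode(cold_bytes, CXL_NODE);

        if (!hot || !cold) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        /* ... application touches hot data often, cold data rarely ... */

        numa_free(hot, hot_bytes);
        numa_free(cold, cold_bytes);
        return 0;
    }

The same pattern extends naturally to the other scenarios: a hypervisor can back additional VM memory with the capacity tier, and a database can treat it as an oversized cache in front of storage.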

And, importantly, CXL is an open standard. Any company can join and contribute to the CXL Consortium, which includes every major company in our industry. At Micron, we’re excited to be deeply engaged with other industry leaders in charting a path toward a CXL-enabled future and an open, advanced ecosystem.

This collaborative innovation is critical for building flexible, optimized infrastructures for the data center. The biggest challenge I hear from CIOs is predicting what infrastructure and capacity they will need for ever-changing workloads. In this fast-moving technology landscape, those needs are difficult to predict, yet today’s status quo forces IT teams to be deterministic in their architectures. CXL addresses this challenge by making it possible to compose systems and build optimized infrastructures as you go, providing the flexibility to evolve with changing business needs.

Predictions for the Future of CXL

Given this potential for enabling the re-architected data center, I have a few key predictions on what CXL will deliver:

  • CXL will enable memory zones. CXL will allow architects to create large pools of both volatile and persistent memory via a direct, high-speed interface. Memory of all types will extend into multiple infrastructure pools — and become a shared resource.
  • CXL will blur the lines of storage and memory, delivering more powerful compute hierarchies. CXL will open up possibilities for new, inventive types of memory and storage that don’t neatly fall into traditional categories. You’ll no longer think “DRAM for memory and NAND for storage.” Instead, there will be mixed usages for both, and the memory-storage hierarchy will advance with emerging technologies like storage-class memory (a brief sketch of this convergence follows this list).
  • CXL will be ubiquitous. As a truly open and ubiquitous interconnect, CXL may go beyond data center and cloud platforms to edge-based platforms. This pervasiveness will power emerging use cases across AI inference, in-memory databases and more. From edge to cloud, the industry will deliver innovation by blending unique solutions for these emerging workloads.
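As one hedged illustration of that memory-storage convergence, today’s persistent-memory software stacks already let an application map storage-class memory into its address space and update it with ordinary loads and stores rather than block I/O; a CXL-attached persistent device could plausibly be consumed the same way. The sketch below assumes a hypothetical DAX-capable mount at /mnt/scm on a recent Linux system.

    /* Sketch of memory-storage convergence under stated assumptions:
     * a file on a DAX-capable (storage-class memory) filesystem is mapped
     * and updated with a plain store instead of a write() call.
     * The path /mnt/scm is hypothetical. Production code would also use
     * MAP_SYNC or explicit flushes for durability guarantees. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t len = 4096;
        int fd = open("/mnt/scm/record.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); close(fd); return 1; }

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* The "storage" is updated by a memory store, not a block I/O call. */
        strcpy((char *)p, "hello, byte-addressable storage");

        munmap(p, len);
        close(fd);
        return 0;
    }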

As CXL sees mainstream adoption in the data center in the next few years, the new ways that servers, storage and networking are architected will make 2021 look archaic. It will feel like looking at a car from 50 years ago, thinking “We used to build cars like that?”  CXL will deliver ultra-high-capacity memory and greater bandwidth, unify the data ecosystem, and unleash boundless architectural freedom and innovation — ultimately reinventing today’s data centers and meeting the demands of tomorrow’s data-centric workloads.

By Raj Hazra, senior vice president & general manager of Micron’s Compute & Networking Business Unit
