Video: High-Performance Memory For AI And HPC


Frank Ferro from Rambus

In this video, Frank Ferro, senior director of product marketing at Rambus, examines the current performance bottlenecks in high-performance computing, drills down into the power and performance of different memory options, and explains which solutions are best suited to different applications and why.

HBM2E offers the capability to achieve tremendous memory bandwidth. Four HBM2E stacks connected to a processor will deliver over 1.6 TB/s of bandwidth. And with 3D stacking of memory, high bandwidth and high capacity can be achieved in an exceptionally small footprint. Further, by keeping data rates relatively low, and the memory close to the processor, overall system power is kept low.
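The headline bandwidth figure can be sanity-checked with a quick calculation. This sketch assumes the commonly cited HBM2E configuration of a 1024-bit interface per stack running at 3.2 Gb/s per pin; actual rates vary by device and vendor.

```python
# Back-of-the-envelope HBM2E bandwidth estimate (assumed figures:
# 1024-bit interface per stack, 3.2 Gb/s per data pin).
PINS_PER_STACK = 1024   # HBM2E interface width in bits
DATA_RATE_GBPS = 3.2    # assumed per-pin data rate in Gb/s
STACKS = 4              # stacks attached to the processor

# Divide by 8 to convert bits to bytes.
per_stack_gb_s = PINS_PER_STACK * DATA_RATE_GBPS / 8   # GB/s per stack
total_tb_s = per_stack_gb_s * STACKS / 1000            # TB/s for 4 stacks

print(f"Per stack:   {per_stack_gb_s:.1f} GB/s")   # 409.6 GB/s
print(f"Four stacks: {total_tb_s:.2f} TB/s")       # 1.64 TB/s
```

At these assumed rates, four stacks land just above the 1.6 TB/s figure quoted in the video.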

According to Ferro, the design tradeoff with HBM is increased complexity and cost: implementation and manufacturing costs are higher for HBM2E than for memories built with traditional manufacturing methods, such as GDDR6 or DDR4.

However, for AI training applications, the benefits of HBM2E make it the superior choice. Its performance is outstanding, and the higher implementation and manufacturing costs can be traded off against savings in board space and power. In data center environments, where physical space is increasingly constrained, HBM2E's compact architecture offers tangible benefits, and its lower power translates to lower heat loads in an environment where cooling is often one of the top operating costs.

Frank Ferro is senior director of product marketing for IP cores at Rambus.
