SK hynix Announces CXL SDK Available on Linux

SEOUL, September 23, 2024 – SK hynix Inc. announced today that features of its Heterogeneous Memory Software Development Kit (HMSDK) for optimizing Compute Express Link (CXL) memory technology are now available on Linux. SK hynix said HMSDK improves the memory package’s bandwidth by over 30 percent, without modifying existing applications, by selectively allocating memory […]
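The teaser does not describe how the allocation works; as a rough illustration of the general idea, spreading a hot buffer across a local DRAM node and a CXL memory node so both contribute bandwidth, the sketch below uses plain libnuma round-robin interleaving. The node IDs are assumptions, and HMSDK's actual policy is more selective than this simple interleave.

```c
/* Illustrative only: spread an allocation across a DRAM node and a
 * CXL memory node with libnuma so both tiers serve traffic.
 * Node IDs 0 (DRAM) and 1 (CXL) are assumptions; HMSDK's own policy
 * is bandwidth-aware rather than this simple round-robin.
 * Build: gcc demo.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not available\n");
        return 1;
    }

    struct bitmask *nodes = numa_bitmask_alloc(numa_max_node() + 1);
    numa_bitmask_setbit(nodes, 0);   /* local DRAM node (assumed) */
    numa_bitmask_setbit(nodes, 1);   /* CXL memory node (assumed) */

    size_t bytes = 1UL << 30;        /* 1 GiB working set */
    double *buf = numa_alloc_interleaved_subset(bytes, nodes);
    if (!buf) {
        perror("numa_alloc_interleaved_subset");
        return 1;
    }

    memset(buf, 0, bytes);           /* touch pages so they are placed */
    /* ... bandwidth-hungry kernel would run on buf here ... */

    numa_free(buf, bytes);
    numa_bitmask_free(nodes);
    return 0;
}
```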

SK hynix Partners with TSMC on HBM

Seoul, April 19, 2024 – Memory company SK hynix Inc. announced today a memorandum of understanding with TSMC to produce next-generation HBM and to enhance the integration of logic and HBM through advanced packaging technology. The company plans to proceed with the development of HBM4, the sixth generation of the HBM family, slated to be mass produced […]

SK hynix to Invest $3.9B in Indiana HBM Fab and R&D with Purdue

Memory chip company SK hynix announced it will invest $3.87 billion in West Lafayette, Indiana to build an advanced packaging fabrication and R&D facility for AI products. The project, which the company said is the first of its kind in the U.S., will be an advanced….

AMD-Powered LUMI Supercomputer: In the Vanguard of HPC Performance and Energy Efficiency

[SPONSORED GUEST ARTICLE] LUMI is a model for both world-class supercomputing and sustainability. It also embodies Europe’s rise on the global HPC scene in recent years. The AMD-powered, HPE-built system, ranked no. 3 on the new TOP500 list of the world’s most powerful supercomputers, also ranks no. 7 on the GREEN500 list of the most […]

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show that CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, equipped with high-bandwidth HBM2e memory and Intel® Advanced Matrix Extensions, can match GPU performance on many AI and HPC workloads.
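The claim hinges on achievable memory bandwidth, and a common way to check that on any CPU, HBM-equipped or not, is a STREAM-style triad loop. The sketch below is a minimal, hedged version of that idea; the array size and timing method are arbitrary choices and this is not the official STREAM benchmark or an Intel-published test.

```c
/* Minimal STREAM-triad-style bandwidth probe (not the official STREAM
 * benchmark): measures sustained bandwidth of a[i] = b[i] + s*c[i].
 * Build: gcc -O3 -fopenmp triad.c
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1UL << 27)   /* 128M doubles per array = 1 GiB each (arbitrary) */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; a[i] = 0.0; }

    double s = 3.0, t0 = omp_get_wtime();
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + s * c[i];
    double t = omp_get_wtime() - t0;

    /* three arrays touched per element: two reads plus one write */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("triad: %.1f GB/s\n", gbytes / t);

    free(a); free(b); free(c);
    return 0;
}
```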

GUC Tapes Out AI/HPC/Networking Platform on TSMC CoWoS Technology with 7.2 Gbps HBM3 Controller and PHY, GLink-2.5D and 112G-LR SerDes IPs

Hsinchu, Taiwan – June 8, 2021 – Global Unichip Corp. (GUC), the Advanced ASIC Leader, announced today that it has successfully taped out an AI/HPC/Networking CoWoS Platform with 7.2 Gbps HBM3 Controller and PHY, GLink-2.5D, and third-party 112G-LR SerDes IPs. The main die of the platform contains the world’s first HBM3 Controller and PHY IP with […]
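For context, HBM3 keeps the 1024-bit-per-stack interface of earlier HBM generations, so a 7.2 Gbps per-pin data rate works out to roughly 7.2 × 1024 / 8 ≈ 921.6 GB/s of peak bandwidth per stack. This is a back-of-the-envelope figure, not one quoted in the announcement.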

ISC 2019 Recap from Glenn Lockwood

In this special guest feature, Glenn Lockwood from NERSC shares his impressions of ISC 2019 from an I/O perspective. “I was fortunate enough to attend the ISC HPC conference this year, and it was a delightful experience from which I learned quite a lot. For the benefit of anyone interested in what they have missed, I took the opportunity on the eleven-hour flight from Frankfurt to compile my notes and thoughts over the week.”

Creating Balance in HPC on the Piz Daint Supercomputer

The flagship supercomputer at the Swiss National Supercomputing Centre (CSCS), Piz Daint, named after a mountain in the Alps, currently delivers 7.8 petaflops of compute performance, or 7.8 quadrillion mathematical calculations per second. A recently announced upgrade will double its peak performance, thanks to a refresh using the latest Intel Xeon CPUs and 4,500 Nvidia Tesla P100 GPUs.

Video: AMD’s Next Generation GPU and High Bandwidth Memory Architecture

“HBM is a new type of CPU/GPU memory (“RAM”) that vertically stacks memory chips, like floors in a skyscraper. In doing so, it shortens your information commute. Those towers connect to the CPU or GPU through an ultra-fast interconnect called the “interposer.” Several stacks of HBM are plugged into the interposer alongside a CPU or GPU, and that assembled module connects to a circuit board. Though these HBM stacks are not physically integrated with the CPU or GPU, they are so closely and quickly connected via the interposer that HBM’s characteristics are nearly indistinguishable from on-chip integrated RAM.”
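The "wide and slow" trade-off the quote describes is easy to put in numbers. The sketch below compares a single first-generation HBM stack with a single GDDR5 chip using commonly cited ballpark figures (a 1024-bit interface at about 1 Gbps per pin versus a 32-bit interface at about 7 Gbps per pin); the values are illustrative and are not taken from the video.

```c
/* Back-of-the-envelope comparison of per-device peak bandwidth:
 * a first-generation HBM stack vs. a single GDDR5 chip.
 * Figures are commonly cited ballpark numbers, not from the video.
 */
#include <stdio.h>

static double peak_gbs(double bus_bits, double gbps_per_pin)
{
    return bus_bits * gbps_per_pin / 8.0;   /* bits -> bytes */
}

int main(void)
{
    /* HBM1: very wide (1024-bit) but modest per-pin speed (~1 Gbps) */
    double hbm   = peak_gbs(1024.0, 1.0);   /* ~128 GB/s per stack */
    /* GDDR5: narrow (32-bit) but fast per pin (~7 Gbps) */
    double gddr5 = peak_gbs(32.0, 7.0);     /* ~28 GB/s per chip */

    printf("HBM stack : %.0f GB/s\n", hbm);
    printf("GDDR5 chip: %.0f GB/s\n", gddr5);
    return 0;
}
```

Stacking several such HBM towers on the interposer is what lets the assembled module reach aggregate bandwidth a conventional GDDR board layout would struggle to match.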