Penguin Computing’s Cluster Collaboration with AMD Makes Heterogeneous System Architecture Clustering a Reality


Today at SC14, Penguin Computing announced the first application-optimized accelerated processing unit (APU) clusters, making seamless GPU/CPU memory sharing on clusters a reality, based on Heterogeneous System Architecture (HSA) technology from Advanced Micro Devices (AMD). The shared-memory capability enables very lightweight context switches, so execution can move almost instantaneously between the GPU and the CPU, whichever runs a given piece of code best at that moment.

Today’s applications demand the power efficiency and high performance of highly parallel computing. However, conventional GPUs and CPUs have separate memory spaces, which creates a bottleneck for efficient cluster operation: data must be copied back and forth, resulting in inefficient GPU/CPU communication that makes scaling difficult.
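
To illustrate what shared GPU/CPU memory changes in the programming model, here is a minimal sketch (generic illustration, not Penguin or AMD sample code) using OpenCL 2.0 fine-grained shared virtual memory, which HSA-capable APUs expose. The CPU writes an allocation, the GPU kernel updates it in place, and the CPU reads the result, with no explicit clEnqueueReadBuffer/clEnqueueWriteBuffer staging copies; error checks are omitted for brevity.

```c
/* Sketch: fine-grained shared virtual memory on an HSA-capable APU
 * via OpenCL 2.0. Assumes an OpenCL 2.0 runtime and GPU device. */
#include <CL/cl.h>
#include <stdio.h>

static const char *kSrc =
    "__kernel void scale(__global float *data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q =
        clCreateCommandQueueWithProperties(ctx, device, NULL, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, &err);
    clBuildProgram(prog, 1, &device, "-cl-std=CL2.0", NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "scale", &err);

    /* One allocation, visible to both the CPU and the GPU. */
    const size_t n = 1024;
    float *data = clSVMAlloc(ctx,
                             CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                             n * sizeof(float), 0);
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;                 /* CPU writes directly, no copy */

    clSetKernelArgSVMPointer(kernel, 0, data);
    float factor = 2.0f;
    clSetKernelArg(kernel, 1, sizeof(float), &factor);
    clEnqueueNDRangeKernel(q, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q);

    printf("data[10] = %f\n", data[10]);    /* CPU reads the GPU result in place */

    clSVMFree(ctx, data);
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```

On a discrete GPU without shared memory, the same workflow would need staging buffers and explicit transfers across the PCIe bus; that copy traffic is the communication overhead the APU clusters are designed to eliminate.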


“We are making these machines immediately available for evaluation as a tremendous tool for software development,” said Phil Pokorny, Chief Technology Officer, Penguin Computing. “HSA is a reality and our technology is already in the hands of major U.S. labs. Penguin Computing’s extensive experience in APU cluster development and implementation is instrumental in this progress, in addition to close collaboration with AMD.”

“Initial feedback from early adopters reinforces our belief that this collaboration with Penguin Computing is an important step forward for the industry,” said Karl Freund, corporate vice president, Product Management and Marketing, Server Business Unit, AMD. “The potential of modern heterogeneous architectures is exciting, and collaborations such as these can result in significant steps forward in performance for a broad range of software applications.”

Named Jäätikkö, Finnish for glacier, the cluster is currently being demonstrated at SC14.

Visit AMD booth #839 at SC14.

Sign up for our insideHPC Newsletter.