Overview of the HGX-1 AI Accelerator Chassis

“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest ‘Pascal’ generation NVIDIA GPUs and NVIDIA’s NVLink high-speed multi-GPU interconnect technology, and it provides high-bandwidth interconnectivity for up to 32 GPUs when four HGX-1 chassis are connected together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.”
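To make the NVLink topology described above more concrete, here is a minimal sketch (not part of the announcement) using the standard CUDA runtime API to discover the GPUs in a chassis and enable peer-to-peer access between them, which is how applications typically take advantage of an NVLink-connected design like the HGX-1's. The eight-GPU count is illustrative; the code simply works with whatever devices are visible.

```cpp
// Sketch: enumerate visible GPUs and enable peer-to-peer access where available.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);  // e.g. 8 on a fully populated HGX-1 chassis (illustrative)
    printf("Visible GPUs: %d\n", count);

    for (int src = 0; src < count; ++src) {
        cudaSetDevice(src);
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            if (canAccess) {
                // Allows direct GPU-to-GPU transfers over the NVLink/PCIe fabric
                cudaDeviceEnablePeerAccess(dst, 0);
                printf("GPU %d -> GPU %d: peer access enabled\n", src, dst);
            }
        }
    }
    return 0;
}
```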

Radio Free HPC Looks at Azure’s Move to GPUs and OCP for Deep Learning

In this podcast, the Radio Free HPC team looks at a set of IT and science stories. Microsoft Azure is making a big move to GPUs and the OCP platform as part of its Project Olympus. Meanwhile, Huawei is gaining market share in the server market, and IBM is bringing storage to the atomic level.

NVIDIA Brings AI to the Cloud with the HGX-1 Hyperscale GPU Accelerator

Today, Microsoft, NVIDIA, and Ingrasys announced a new industry-standard design to accelerate artificial intelligence in the next-generation cloud. “Powered by eight NVIDIA Tesla P100 GPUs in each chassis, HGX-1 features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.”
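Because the switching design lets providers expose different CPU-to-GPU ratios per instance type, workloads should not assume a fixed GPU count. Below is a minimal, hedged sketch (again using only standard CUDA runtime calls, not anything specific to HGX-1) of how a job might adapt to whatever configuration its cloud instance happens to expose.

```cpp
// Sketch: adapt to the GPU configuration the instance exposes instead of
// hard-coding a device count.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No GPUs exposed to this instance; falling back to a CPU path.\n");
        return 0;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %zu MiB\n", i, prop.name, prop.totalGlobalMem >> 20);
    }
    return 0;
}
```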