Azure HBv2 Virtual Machines eclipse 80,000 cores for MPI HPC


Today Microsoft announced general availability of Azure HBv2-series Virtual Machines, designed to deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world high performance computing (HPC) workloads.

HBv2 VMs target workloads such as computational fluid dynamics (CFD), explicit finite element analysis, seismic processing, reservoir modeling, rendering, and weather simulation. They are also the first VMs in the public cloud to feature 200 gigabit per second HDR InfiniBand from Mellanox.

Each HBv2 VM features 120 AMD EPYC 7002-series CPU cores with clock frequencies up to 3.3 GHz, 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading (SMT). HBv2 VMs provide up to 340 GB/sec of memory bandwidth, which is 45-50 percent more than comparable x86 alternatives and roughly three times more than what most HPC customers have in their datacenters today. An HBv2 virtual machine is capable of up to 4 double-precision teraFLOPS and up to 8 single-precision teraFLOPS.
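That headline figure is consistent with the usual peak-throughput estimate for Zen 2 cores. As a rough back-of-the-envelope check (assuming 16 double-precision FLOPs per core per cycle from the two 256-bit FMA units, and a sustained all-core clock of roughly 2.1 GHz; both values are assumptions for illustration, not published Azure specifications):

\[
P_{\mathrm{peak}} \approx N_{\mathrm{cores}} \times \frac{\mathrm{FLOPs}}{\mathrm{cycle}} \times f
= 120 \times 16 \times 2.1\,\mathrm{GHz} \approx 4.0\ \mathrm{DP\ teraFLOPS}
\]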

HDR InfiniBand on Azure delivers latencies as low as 1.5 microseconds, more than 200 million messages per second per VM, and advanced in-network computing engines like hardware offload of MPI collectives and adaptive routing for higher performance on the largest scaling HPC workloads. HBv2 VMs use standard Mellanox OFED drivers that support all RDMA verbs and MPI variants.
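As a concrete illustration of the MPI path these numbers describe, the sketch below times an MPI_Allreduce, the kind of collective that the in-network offload engines mentioned above accelerate. It uses only standard MPI calls; the build and launch commands (for example mpicc and mpirun from whichever MPI stack is installed) are assumptions about the local toolchain, and dedicated tools such as the OSU micro-benchmarks are the more common way to measure these latencies in practice.

```c
/* Minimal MPI_Allreduce timing sketch. Standard MPI only; nothing here is
 * HBv2-specific -- build and launch details depend on the local MPI stack. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank, global = 0.0;
    const int iters = 1000;

    /* Synchronize, then time many small collectives to average out noise. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("ranks=%d  avg MPI_Allreduce time: %.2f us\n",
               size, (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}
```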

Two use cases stood out as particularly relevant to the HPC community:

  • Numerical weather prediction, used by scientists to understand and forecast atmospheric behavior, benefits strongly from HPC: in one simulation, Azure HBv2 VMs delivered super-linear scaling up to 128 VMs (15,360 parallel processes). Gains continued up to the largest scale tested in this exercise, 672 VMs (80,640 parallel processes), where a 482x speedup over a single VM was observed. A 2.2x performance increase was also recorded at 512 VMs.
  • CFD workloads represent a key opportunity for Azure HPC customers. Last year, Azure became the first public cloud to scale a CFD application to more than 10,000 parallel processes. With the launch of HBv2 VMs, Azure’s CFD capabilities are increasing again: in one simulation, HBv2 VMs sustained linear scaling efficiency to more than 15,000 parallel processes across 128 VMs. At the largest scale tested, 640 VMs and 57,600 parallel processes, HBv2 delivered 84 percent scaling efficiency (see the efficiency definition below).

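For reference, the scaling-efficiency figures quoted above follow the standard definition of parallel efficiency, the measured speedup on N VMs divided by N; applying it to the weather result gives a consistent number:

\[
E(N) = \frac{S(N)}{N}, \qquad E(672) \approx \frac{482}{672} \approx 72\%
\]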
With Azure HBv2 VMs, these ultra-low-latency RDMA capabilities can also deliver on-demand parallel filesystems at no additional cost beyond the HBv2 VMs already provisioned for compute purposes.

“The 2nd Gen AMD EPYC processors provide fantastic core scaling, access to massive memory bandwidth and are the first x86 server processors that support PCIe 4.0; all of these features enable some of the best high-performance computing experiences for the industry,” said Ram Peddibhotla, corporate vice president, Data Center Product Management, AMD. “What Azure has done for HPC in the cloud is amazing; demonstrating that HBv2 VMs and 2nd Gen EPYC processors can deliver supercomputer-class performance, MPI scalability, and cost efficiency for a variety of real-world HPC workloads, while democratizing access to HPC that will help drive the advancement of science and research.”
