
GIGABYTE Releases Arm-Based Server for NVIDIA Baseboard Accelerators for HPC, AI

May 19, 2022 – GIGABYTE Technology (TWSE: 2376), maker of high-performance servers and workstations, today announced a scalable server, the G492-PD0, that supports Ampere Altra Max or Altra processors with NVIDIA HGX A100 Tensor Core GPUs for cloud infrastructure, HPC, AI and other high-performance environments. "Leveraging Ampere's Altra Max CPU with a high core count, up to 128 Armv8.2 Neoverse N1 cores per socket, the G492-PD0 delivers high performance efficiently and with minimized total cost of ownership," the company said.

GIGABYTE said it developed the G492-PD0 in response to demand for high-performing platform choices beyond x86, namely the Arm-based processors from Ampere. The G492 server was tailored to handle the performance of NVIDIA's baseboard accelerator without compromising or throttling CPU or GPU performance, according to the company. It joins the existing line of GIGABYTE G492 servers that support the NVIDIA HGX A100 8-GPU baseboard on the AMD EPYC platform (G492-ZL2, G492-ZD2, G492-ZD0) and on Intel Xeon Scalable processors (G492-ID0).

The G492 line of servers employs a novel cooling solution that dedicates a cooling chamber to the NVIDIA accelerators and to GPUs installed in the networking expansion slots, ensuring the highest possible airflow over the high-performance components. On top of the 3U GPU chamber sits a 1U server that houses the CPUs, memory, storage, and expansion slots. Thanks to the high efficiency of Ampere's processors, the slim 1U server sustains peak performance without compromise, even with all Gen4 U.2 NVMe drive bays and the two additional front-of-chassis expansion slots populated.

Support for the NVIDIA HGX A100 platform in the new GIGABYTE server enables NVIDIA Magnum IO, NVIDIA's IO architecture for multi-GPU, multi-node IO in the accelerated data center. Magnum IO GPUDirect technologies accelerate throughput while offloading work from the CPU, achieving notable performance boosts. The HGX platform supports NVIDIA Magnum IO GPUDirect RDMA for direct data exchange between GPUs and third-party devices such as NICs or storage adapters. There is also support for NVIDIA Magnum IO GPUDirect Storage, which creates a direct data path from storage to GPU memory while offloading the CPU, resulting in higher bandwidth and lower latency. For high-speed interconnects, the four-GPU NVIDIA A100 configurations incorporate NVIDIA NVLink®, while the eight-GPU configurations use NVSwitch™ and NVLink to enable 600GB/s GPU peer-to-peer communication. Furthermore, NVIDIA Magnum IO with the NVIDIA Collective Communications Library (NCCL) is used by almost all data analytics frameworks to optimize usage of NVLink, PCIe, and high-speed networking for multi-GPU communications to scale AI applications.
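To illustrate how frameworks typically exercise NCCL over NVLink on a system like this, here is a minimal sketch (not from the release) using PyTorch's distributed API, which selects NCCL as its multi-GPU backend. It is a single-process illustration under assumed defaults; it falls back to the CPU-only "gloo" backend when no CUDA device is present.

```python
# Minimal sketch of an NCCL-style collective via PyTorch distributed.
# Assumptions (not from the release): PyTorch is installed; with GPUs
# present, the "nccl" backend routes the all-reduce over NVLink/NVSwitch.
import os
import torch
import torch.distributed as dist

def allreduce_demo():
    # Single-node rendezvous defaults; a real launch uses torchrun with
    # one rank per GPU.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend, rank=0, world_size=1)
    t = torch.tensor([1.0, 2.0, 3.0])
    if backend == "nccl":
        t = t.cuda()  # NCCL operates on device-resident tensors
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # summed across all ranks
    dist.destroy_process_group()
    return t.cpu().tolist()

if __name__ == "__main__":
    print(allreduce_demo())
```

With a world size of one, the all-reduce returns the tensor unchanged; across eight ranks on the HGX baseboard, the same call would sum each element across all GPUs over NVLink.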

GIGABYTE will continue to pursue the broadest lineup of servers on the market for all enterprise workloads. Arm-based servers are nothing new to GIGABYTE, which has been developing such solutions for almost a decade, and future platforms will be created based on market demand, including more liquid cooling solutions.

Remote and Multiple Server Management:

As part of its value proposition, GIGABYTE provides the GIGABYTE Management Console (GMC), a web browser-based platform for BMC server management. Additionally, GIGABYTE Server Management (GSM) software is free to download and is used to monitor and manage multiple servers. GMC and GSM offer great value while reducing license and maintenance costs for customers.
