NVIDIA Reveals Eos Supercomputer: 4,608 H100 GPUs for 18.4 AI Exaflops


NVIDIA on Thursday released a video offering the first public look at Eos (pictured here), a monster 18.4-exaflop FP8 AI supercomputer powered by 576 DGX H100 systems.

The company calls Eos an “extremely large-scale NVIDIA DGX SuperPOD” with Quantum-2 InfiniBand networking and software.

Announced in November at the SC23 conference in Denver, Eos — named for the Greek goddess who opens the gates of dawn — has a total of 4,608 H100 GPUs and an architecture optimized for AI workloads that need “ultra-low-latency and high-throughput interconnectivity across a large cluster of accelerated computing nodes,” according to the company.
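The headline figures line up with NVIDIA's published DGX H100 specifications — each DGX H100 system contains 8 H100 GPUs and delivers a quoted 32 petaFLOPS of FP8 compute. A quick back-of-the-envelope sketch (using those published per-system specs, not anything Eos-specific) shows where the 4,608-GPU and 18.4-exaflop numbers come from:

```python
# Sanity-check Eos's headline numbers from NVIDIA's published
# per-system DGX H100 specs (8 H100 GPUs, 32 PFLOPS FP8 per system).
num_dgx_systems = 576      # DGX H100 systems in Eos
gpus_per_dgx = 8           # H100 GPUs per DGX H100
fp8_pflops_per_dgx = 32    # NVIDIA's quoted FP8 peak per DGX H100

total_gpus = num_dgx_systems * gpus_per_dgx
total_exaflops = num_dgx_systems * fp8_pflops_per_dgx / 1000  # PFLOPS -> EFLOPS

print(total_gpus)      # 4608 GPUs
print(total_exaflops)  # 18.432 exaflops FP8, reported as "18.4"
```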

NVIDIA said Eos would rank No. 9 among the world’s fastest supercomputers, based on the latest TOP500 list, released last November.

Based on NVIDIA Quantum-2 InfiniBand with In-Network Computing technology, its network architecture supports data transfer speeds of up to 400Gb/s, facilitating the rapid movement of large datasets essential for training complex AI models, NVIDIA said.

It includes software offerings such as NVIDIA Base Command and AI Enterprise.

“People are changing the world with generative AI, from drug discovery to chatbots to autonomous machines and beyond,” NVIDIA said in a blog released today. “To achieve these breakthroughs, they need more than AI expertise and development skills. They need an AI factory — a purpose-built AI engine that’s always available and can help ramp their capacity to build AI models at scale.”