Nvidia Announces DGX SuperPODs for AI, Available in 20-Node Increments


Nvidia today announced at its GPU Technology Conference (GTC) the Nvidia DGX SuperPOD Solution for Enterprise, the world’s first turnkey AI infrastructure, making it possible for organizations to install incredibly powerful AI supercomputers with extraordinary speed — in many cases in just a few weeks’ time.

Available in cluster sizes ranging from 20 to 140 individual Nvidia DGX A100 systems, DGX SuperPODs are now shipping and expected to be installed in Korea, the U.K., Sweden and India before the end of the year.

Sold in 20-unit modules interconnected with Nvidia Mellanox® HDR InfiniBand networking, DGX SuperPOD systems start at 100 petaflops of AI performance and can scale up to 700 petaflops to run the most complex AI workloads.
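The figures above imply roughly 5 petaflops of AI performance per DGX A100 system (100 petaflops for the 20-node entry configuration). A minimal sketch of that scaling arithmetic, using only numbers stated in this announcement:

```python
# Per-node AI performance implied by the entry configuration:
# 100 petaflops / 20 DGX A100 systems = 5 petaflops per node.
PFLOPS_PER_NODE = 100 // 20

def superpod_petaflops(nodes: int) -> int:
    """Aggregate AI petaflops for a DGX SuperPOD of `nodes` DGX A100 systems."""
    return nodes * PFLOPS_PER_NODE

# Configurations mentioned in the announcement:
print(superpod_petaflops(20))   # entry configuration -> 100
print(superpod_petaflops(80))   # Cambridge-1 -> 400
print(superpod_petaflops(140))  # maximum / NAVER CLOVA -> 700
```

This simple linear model matches both the 100-petaflop starting point and the 400-petaflop figure quoted for the 80-node Cambridge-1 system later in the article.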

“Traditional supercomputers can take years to plan and deploy, but the turnkey Nvidia DGX SuperPOD Solution for Enterprise helps customers begin their AI transformation today,” said Charlie Boyle, vice president and general manager of DGX systems at Nvidia. “State-of-the-art conversational AI, recommender systems and computer vision workloads rapidly exceed the capabilities of traditional infrastructure, and our new solution gives customers a fast track to the world’s most advanced, scalable AI infrastructure and Nvidia expertise.”

Visionary organizations are creating AI centers of excellence with the DGX SuperPOD Solution for Enterprise. Those unveiling new DGX SuperPOD AI supercomputers today include:

  • NAVER, Korea's leading search engine, has partnered with LINE, Japan's No. 1 messaging service, to create the AI technology brand NAVER CLOVA. NAVER CLOVA is using its DGX SuperPOD, built with 140 DGX A100 systems, to scale out research and development of natural language processing models and conversational AI services on its AI platform with the Nvidia TensorRT SDK for high-performance deep learning inference.
  • Linköping University, in Sweden, is building BerzeLiUs, a DGX SuperPOD of 60 DGX A100 systems. BerzeLiUs will be a powerful resource to advance AI research and boost collaboration between academia and Swedish industry across research programs financed by the Knut and Alice Wallenberg Foundation, such as the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program and initiatives in the life sciences and quantum technology.
  • C-DAC, the Centre for Development of Advanced Computing operating under the Ministry of Electronics and Information Technology in India, is commissioning India’s fastest and largest HPC-AI supercomputer, called PARAM Siddhi – AI. Built with 42 DGX A100 systems, the supercomputer will help address nationwide and global challenges in healthcare, education, energy, cybersecurity, space, automotive and agriculture through research partnerships and collaboration across academia, industry and startups.

Additionally, Nvidia separately announced today plans to build Cambridge-1, an 80-node DGX SuperPOD with 400 petaflops of AI performance. Once deployed by the end of the year, it will be the fastest supercomputer in the U.K. The system will be used for collaborative research within the U.K. AI and healthcare community across academia, industry and startups.

Cambridge-1 will help accelerate diverse healthcare workloads, including drug development with the Nvidia Clara healthcare application framework. It will also enable researchers to rapidly analyze volumes of medical information using natural language processing with the specialized Nvidia BioMegatron model available on the Nvidia NGC software hub.

The DGX SuperPOD Solution for Enterprise draws on years of research and development in building the world's most advanced AI systems, which power Nvidia's own engineering work in automotive, healthcare, conversational AI, recommender systems, data science and computer graphics.

Nvidia Selene, a 280-node DGX SuperPOD, set the bar high for AI with top marks on both TOP500 and MLPerf results published earlier this year. Its DGX SuperPOD architecture also delivers breakthrough efficiency with record-setting Green500 performance of 20 gigaflops/watt.

AI infrastructure requires extremely high-speed storage to handle a variety of data types in parallel, such as text, tabular data, audio and video. The Nvidia DGX SuperPOD Solution for Enterprise features all-flash storage that is optimized to meet customers’ specific requirements as well as the unique demands of AI workloads. DDN is the first Nvidia-qualified storage partner for the DGX SuperPOD Solution for Enterprise.

From customized capacity planning and data center design services to application performance testing and developer operations training, the DGX SuperPOD Solution for Enterprise provides the fastest path to AI innovation at scale. Each DGX SuperPOD is fully racked, stacked and configured by Nvidia-Certified partners. These Nvidia AI experts ensure installs are easy, even when building out AI infrastructure with dozens or hundreds of nodes connected by extensive cabling.

Following installation, Nvidia and certified experts work with customers to ensure their AI workloads are optimized with the latest Nvidia software available on the NGC hub of cloud-native, GPU-optimized containers, models and industry-specific SDKs.

The DGX SuperPOD Solution for Enterprise is available from select Nvidia partners worldwide. Learn more at www.nvidia.com/dgxsuperpod.

In addition to the new DGX SuperPOD Solution for Enterprise, the DGX SuperPOD blueprint is available to serve as an industry guide for Nvidia-Certified partners to plan and deploy their own DGX SuperPOD offerings, complete with services and certified support for NGC software.

source: Nvidia