NVIDIA Powers Top 13 Most Energy Efficient Supercomputers


Today NVIDIA announced that the NVIDIA Tesla AI supercomputing platform powers the top 13 measured systems on the new Green500 list of the world’s most energy-efficient high performance computing systems. All 13 use NVIDIA Tesla P100 data center GPU accelerators, including four systems based on the NVIDIA DGX-1 AI supercomputer.

NVIDIA also released performance data showing that NVIDIA Tesla GPUs have improved performance for HPC applications by 3x over the Kepler architecture released two years ago. That gain significantly outpaces what Moore’s Law would have predicted, even before the law began slowing in recent years.

Additionally, NVIDIA announced that its Tesla V100 GPU accelerators — which combine AI and traditional HPC applications on a single platform — are projected to provide the U.S. Department of Energy’s Summit supercomputer with 200 petaflops of 64-bit floating point performance and over 3 exaflops of AI performance when it comes online later this year.

NVIDIA GPUs Fueling World’s Greenest Supercomputers

The Green500 list, released today at the ISC High Performance conference in Frankfurt, is topped by the new TSUBAME 3.0 system at the Tokyo Institute of Technology, powered by NVIDIA Tesla P100 GPUs. It hit a record 14.1 gigaflops per watt, 50 percent higher efficiency than the previous top system, NVIDIA’s own SATURNV, which ranks No. 10 on the latest list.
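
For context, the Green500 metric is simply sustained Linpack (HPL) performance divided by average power draw. The short Python sketch below illustrates the arithmetic; the example Rmax and power figures are hypothetical, and the prior record is back-calculated from the 50 percent improvement quoted above rather than taken from the list itself.

```python
# Back-of-envelope sketch of the Green500 metric (not an official tool).
# Efficiency is sustained Linpack (HPL) performance divided by average power.

def gflops_per_watt(rmax_gflops: float, avg_power_watts: float) -> float:
    """Green500-style energy efficiency in gigaflops per watt."""
    return rmax_gflops / avg_power_watts

# Hypothetical example: a 1 petaflop Rmax run drawing 100 kW on average.
print(gflops_per_watt(rmax_gflops=1.0e6, avg_power_watts=100_000))  # 10.0 GF/W

# The article quotes TSUBAME 3.0 at 14.1 GF/W, about 50 percent above the
# previous leader, which implies the prior record was roughly 14.1 / 1.5 GF/W.
print(round(14.1 / 1.5, 1))  # ~9.4 GF/W
```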

Spots two through six on the new list are clusters housed at Yahoo Japan, Japan’s National Institute of Advanced Industrial Science and Technology, Japan’s Center for Advanced Intelligence Project (RIKEN), the University of Cambridge and the Swiss National Supercomputing Centre (CSCS), home to the newly crowned fastest supercomputer in Europe, Piz Daint. Other NVIDIA-powered systems among the top 13 are housed at E4 Computer Engineering, the University of Oxford and the University of Tokyo.

Systems built on NVIDIA’s DGX-1 AI supercomputer — which combines NVIDIA Tesla GPU accelerators with a fully optimized AI software package — include RAIDEN at RIKEN, JADE at the University of Oxford, a hybrid cluster at a major social media and technology services company and NVIDIA’s own SATURNV.

“Researchers taking on the world’s greatest challenges are seeking a powerful, unified computing architecture to take advantage of HPC and the latest advances in AI,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Our AI supercomputing platform provides one architecture for computational and data science, providing the most brilliant minds a combination of capabilities to accelerate the rate of innovation and solve the unsolvable.”

“With the TSUBAME 3.0 supercomputer, our goal was to deliver a single powerful platform for both HPC and AI, with optimal energy efficiency, as one of Japan’s flagship national supercomputers,” said Professor Satoshi Matsuoka of the Tokyo Institute of Technology. “The most important point is that we achieved this result with a top-tier, multi-petascale production machine. NVIDIA Tesla P100 GPUs allowed us to excel at both of these objectives, so we can provide this revolutionary AI supercomputing platform to accelerate scientific research and education across the country.”

Volta: Leading the Path to Exascale

NVIDIA revealed progress toward achieving exascale levels of performance, with anticipated leaps in speed, efficiency and AI computing capability for the Summit supercomputer, scheduled for delivery later this year to the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at Oak Ridge National Laboratory.

Featuring Tesla V100 GPU accelerators, Summit is projected to deliver 200 petaflops of performance — compared with 93 petaflops for the world’s current fastest system, China’s TaihuLight. Additionally, Summit is expected to have strong AI computing capabilities, achieving more than 3 exaflops of half-precision Tensor Operations.
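
Put another way, those projections amount to a bit more than a 2x lead over TaihuLight in double precision and roughly a 15x gap between Summit’s AI throughput and its FP64 throughput. The snippet below simply restates that arithmetic using the article’s round numbers; it is illustrative, not a measured result.

```python
# Illustrative arithmetic on the Summit projections quoted above.
# All figures are the article's round numbers, not measured results.

summit_fp64_pflops = 200.0   # projected double-precision performance
taihulight_pflops = 93.0     # current fastest system, for comparison
summit_ai_exaflops = 3.0     # projected half-precision tensor operations

print(f"FP64 lead over TaihuLight: {summit_fp64_pflops / taihulight_pflops:.1f}x")            # ~2.2x
print(f"AI-to-FP64 throughput ratio: {summit_ai_exaflops * 1000 / summit_fp64_pflops:.0f}x")  # ~15x
```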

“AI is extending HPC and together they are accelerating the pace of innovation to help solve some of the world’s most important challenges,” said Jeff Nichols, associate laboratory director of the Computing and Computational Science Directorate at Oak Ridge National Laboratory. “Oak Ridge’s pre-exascale supercomputer, Summit, is powered by NVIDIA Volta GPUs that provide a single unified architecture that excels at both AI and HPC. We believe AI supercomputing will unleash breakthrough results for researchers and scientists.”

Volta: Ultimate Architecture for AI Supercomputing

To extend the reach of Volta, NVIDIA also announced it is making new Tesla V100 GPU accelerators available in a PCIe form factor for standard servers. With PCIe systems, as well as previously announced systems using NVIDIA NVLink™ interconnect technology, coming to market, Volta promises to revolutionize HPC and bring groundbreaking AI technology to supercomputers, enterprises and clouds.

Specifications of the PCIe form factor include (a short back-of-envelope sketch follows the list):

  • 7 teraflops double-precision performance, 14 teraflops single-precision performance and 112 teraflops half-precision performance with NVIDIA GPU BOOST technology
  • 16GB of CoWoS HBM2 stacked memory, delivering 900GB/sec of memory bandwidth
  • Support for PCIe Gen 3 interconnect (up to 32GB/sec bi-directional bandwidth)
  • 250 watts of power
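
Taken at face value, those datasheet figures imply a 16x gap between half-precision tensor throughput and double precision, roughly 0.13 bytes of memory bandwidth per FP64 flop, and a theoretical peak of about 28 gigaflops per watt at board power. The sketch below restates those peak numbers only; sustained application efficiency will be lower.

```python
# Back-of-envelope ratios from the PCIe Tesla V100 figures listed above.
# These use peak datasheet numbers, not sustained application performance.

fp64_tflops = 7.0            # double-precision peak
fp16_tensor_tflops = 112.0   # half-precision peak with tensor operations
mem_bw_gb_s = 900.0          # HBM2 memory bandwidth
board_power_w = 250.0        # board power

print(f"tensor-to-FP64 ratio: {fp16_tensor_tflops / fp64_tflops:.0f}x")        # 16x
print(f"bytes per FP64 flop: {mem_bw_gb_s / (fp64_tflops * 1000):.2f}")        # ~0.13
print(f"peak FP64 efficiency: {fp64_tflops * 1000 / board_power_w:.0f} GF/W")  # 28 GF/W
```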

NVIDIA Tesla V100 GPU accelerators for PCIe-based systems are expected to be available later this year from NVIDIA reseller partners and manufacturers, including Hewlett Packard Enterprise (HPE).

“HPE is excited to complement our purpose-built HPE Apollo systems innovation for deep learning and AI with the unique, industry-leading strengths of the NVIDIA Tesla V100 technology architecture to accelerate insights and intelligence for our customers,” said Bill Mannel, vice president and general manager of HPC and AI at Hewlett Packard Enterprise. “HPE will support NVIDIA Volta with PCIe interconnects in three different systems in our portfolio and provide early access to NVLink 2.0 systems to address emerging customer demand.”
