TOP500 at SC24: El Cap on Top, U.S. Now Has 3 Exascale Supercomputers

Here in Atlanta at SC24, where an anticipated 16,000 attendees are expected to set a conference attendance record, the new TOP500 list of the world’s most powerful supercomputers reports that El Capitan, the HPE-Cray/AMD supercomputer at Lawrence Livermore National Lab, has taken the top spot and is the third system to exceed the exascale milestone (a billion billion calculations per second) with a High-Performance LINPACK (HPL) benchmark of just under 1.75 exaflops.

The Frontier and Aurora exascale systems, housed at Oak Ridge and Argonne national labs, respectively, moved down to the No. 2 and No. 3 spots.

“El Cap” has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8GHz and AMD Instinct MI300A accelerators. The system uses the Cray Slingshot-11 fabric and has an energy efficiency of 58.89 Gigaflops/watt, placing it at No. 18 on the GREEN500 list. AMD noted that the MI300A APU combines CPU and GPU cores and stacked memory in a single package.

The Frontier system, now at No. 2, increased its HPL score from 1.206 EFlop/s to 1.353 EFlop/s on the new list. Frontier also increased its total core count, from 8,699,904 cores on the last list to 9,066,176 cores on this list.

The Aurora system kept its HPL benchmark score from the last list, achieving 1.012 EFlop/s. Aurora was built by Intel based on the HPE Cray EX – Intel Exascale Compute blade, which uses Intel Xeon CPU Max Series processors and Intel Data Center GPU Max Series accelerators.

The top three systems were built under the auspices of the U.S. Department of Energy’s Exascale Computing Project, a nearly $2 billion program that began in 2016. The U.S. now has the only three exascale-class systems in the world, as verified by the TOP500, though it is generally believed that China also has several exascale supercomputers. The PRC stopped participating in the TOP500 benchmark seven years ago.

The early response from industry observers is enthusiastic.

“The installation of three exascale computers in the U.S. is an amazing achievement,” said Earl Joseph, CEO of industry analyst firm Hyperion Research, “given all the struggles over the last four to five years, with Covid, supply chain issues and rising costs. I’m looking forward to all of the new scientific results that will come from applying these systems to critical national research. The Livermore system will bring us closer to fusion energy and a better understanding of nuclear reactions as well as better understanding of how things work at a quantum level.”

“The El Capitan supercomputer is a tremendous achievement that required the best of a years-long public-private partnership to fulfill,” said Addison Snell, CEO of analyst firm Intersect360 Research. “While the world is captivated by the use of AI for writing poetry and adding backgrounds to photos, the DOE is focused on the apex of science. And with science, the primary emphasis is on modeling and simulation, for which high-precision, deterministic computing is still paramount. Here El Capitan truly shines, bringing new levels of capability to the DOE mission. LLNL, DOE, HPE, and AMD should take a well-deserved bow.”

At No. 4 on the TOP500 is the Eagle system, installed on the Microsoft Azure Cloud. It remains the highest-ranked cloud-based system on the list with an HPL score of 561.2 PFlop/s.

The only other new system in the top five is the HPC6 system at No. 5. This machine is installed at the Eni S.p.A. center in Ferrera Erbognone, Italy, and has the same architecture as Frontier. HPC6 achieved an HPL benchmark of 477.90 PFlop/s and is now the fastest system in Europe.

Here is a summary of the rest of the top 10:

  • Fugaku, the No. 6 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Petaflop/s. It remains the fastest system on the HPCG benchmark with 16 Petaflop/s.
  • The Alps system installed at the Swiss National Supercomputing Centre (CSCS) in Switzerland is now at No. 7 after a recent upgrade. It is an HPE Cray EX254n system with NVIDIA Grace 72C CPUs, NVIDIA GH200 Superchips, and a Slingshot-11 interconnect. With the upgrade it achieved 434.9 Petaflop/s.
  • The LUMI system, another HPE Cray EX system, installed at the EuroHPC center at CSC in Finland, is at No. 8 with a performance of 380 Petaflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range exascale supercomputers for processing big data. One of the pan-European pre-exascale supercomputers, LUMI is located in CSC’s data center in Kajaani, Finland.
  • The No. 9 system, Leonardo, is installed at another EuroHPC site, CINECA in Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz main processors, NVIDIA A100 SXM4 40GB accelerators, and a quad-rail NVIDIA HDR100 InfiniBand interconnect. It achieved an HPL performance of 241.2 Petaflop/s.
  • Rounding out the top 10 is the new Tuolumne system, also installed at Lawrence Livermore National Laboratory in California. A sister system to El Capitan with the same architecture, it achieved 208.1 Petaflop/s on its own.

Other TOP500 Highlights

The new list found AMD and Intel processors to be the preferred options for systems in the top 10 (Nvidia chips are used in eight of the next 10 systems). Five systems use AMD processors (El Capitan, Frontier, HPC6, LUMI, and Tuolumne) while three use Intel (Aurora, Eagle, Leonardo). Alps relies on an NVIDIA processor, while Fugaku has a proprietary Arm-based Fujitsu A64FX 48C 2.2GHz.

Seven of the top 10 computers use the Slingshot-11 interconnect (El Capitan, Frontier, Aurora, HPC6, Alps, LUMI, and Tuolumne) while two others use InfiniBand (Eagle and Leonardo). Fugaku has its own proprietary Tofu interconnect.

While China and the United States once again earned the most entries on the overall TOP500 list, China is not participating to the extent that it once did. The U.S. added two systems, bringing its total to 173. China again reduced its number of machines on the list, from 80 to 63 systems. Germany is catching up to China, with 41 machines on the list.

By continent, the upset on the previous list that saw Europe overtake Asia holds here: North America had 181 machines on the list, Europe 161, and Asia 143.

This edition of the GREEN500 saw big changes from new systems in the top three, outside of the No. 1 spot.

The No. 1 spot was once again claimed by JEDI – JUPITER Exascale Development Instrument, a EuroHPC/FZJ system in Germany. Taking the No. 224 spot on the TOP500, JEDI repeated its energy efficiency rating from the last list at 72.73 GFlops/Watt while producing an HPL score of 4.5 PFlop/s. JEDI is a BullSequana XH3000 machine with NVIDIA GH200 Grace Hopper Superchips (72C, 2GHz) and a quad-rail NVIDIA InfiniBand NDR200 interconnect, and it has 19,584 total cores.

The No. 2 spot on this edition’s GREEN500 was claimed by the new ROMEO-2025 system at the ROMEO HPC Center in Champagne-Ardenne, France. This system premiered with an energy efficiency rating of 70.91 GFlops/Watt and an HPL benchmark of 9.863 PFlop/s. Although ROMEO-2025 is new, its architecture is identical to JEDI’s, though at twice the size; thus its energy efficiency is slightly lower.

The No. 3 spot was claimed by the new Adastra 2 system at the Grand Equipement National de Calcul Intensif – Centre Informatique National de l’Enseignement Supérieur (GENCI-CINES) in France. Adastra 2’s first appearance on the list showed an energy efficiency score of 69.10 GFlops/Watt and an HPL score of 2.529 PFlop/s. This machine is an HPE Cray EX255a system with AMD 4th Gen EPYC 24-core 1.8GHz processors, AMD Instinct MI300A accelerators, and a Slingshot-11 interconnect; it has 16,128 total cores and runs RHEL.

El Capitan at Yosemite National Park in California

The TOP500 organizers said El Capitan, named for a vertical rock formation in California, and Frontier deserve honorable mentions. Considering El Capitan’s top-scoring HPL benchmark of 1.742 EFlop/s, it is quite impressive that the machine also snagged the No. 18 spot on the GREEN500 with an energy efficiency score of 58.89 Gigaflops/watt. Frontier – the winner on the previous TOP500 list and No. 2 on this one – produced an impressive energy efficiency score of 54.98 Gigaflops/watt on this GREEN500 list. Both systems demonstrate that it is possible to achieve immense computational power while also prioritizing energy efficiency.
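
The GREEN500 efficiency ratings relate directly to each system’s HPL score and power draw. As a back-of-the-envelope check (the helper function below is illustrative, not official TOP500 tooling), dividing the HPL scores quoted above by the efficiency ratings implies each machine’s approximate power consumption:

```python
def implied_power_mw(hpl_eflops: float, gflops_per_watt: float) -> float:
    """Power draw (in megawatts) implied by an HPL score and a GREEN500
    efficiency rating. 1 EFlop/s = 1e9 GFlop/s, and 1 MW = 1e6 W."""
    return hpl_eflops * 1e9 / gflops_per_watt / 1e6

# Figures quoted in the lists above
print(round(implied_power_mw(1.742, 58.89), 1))  # El Capitan → 29.6 (MW)
print(round(implied_power_mw(1.353, 54.98), 1))  # Frontier   → 24.6 (MW)
```

That is, an exascale-class HPL run at today’s efficiency levels draws on the order of 25–30 megawatts.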

The TOP500 list has incorporated the High-Performance Conjugate Gradient (HPCG) benchmark results, which provide an alternative metric for assessing supercomputer performance. This score is meant to complement the HPL measurement to give a fuller understanding of the machine.

  • Supercomputer Fugaku remains the leader on the HPCG benchmark with 16 PFlop/s. It has held the top position since June 2020.
  • The DOE system Frontier at ORNL remains in the second position with 14.05 HPCG-PFlop/s.
  • The third position was again captured by the Aurora system with 5.6 HPCG-PFlop/s.
  • There are no HPCG submissions for El Capitan yet.

The HPL-MxP benchmark seeks to highlight the use of mixed-precision computation. Traditional HPC uses 64-bit floating point computation, but today’s hardware offers various levels of floating-point precision – 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that using mixed precision during computation makes much higher performance possible: with mathematical techniques such as iterative refinement, a mixed-precision computation can deliver the same accuracy as straight 64-bit precision.
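
The core idea can be illustrated with a minimal NumPy sketch of mixed-precision iterative refinement (this is a toy illustration of the technique, not the actual HPL-MxP benchmark code): the expensive solve runs in low precision, and cheap residual corrections in double precision recover full 64-bit accuracy.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b with the costly O(n^3) work done in float32,
    then recover float64 accuracy via cheap O(n^2) refinement steps."""
    A32 = A.astype(np.float32)
    # Initial low-precision solution
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full float64 precision
        # Correction solved in low precision again
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
# Relative residual ends up near float64 machine epsilon, far below
# the ~1e-7 accuracy a pure float32 solve would give.
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

In the benchmark itself the low-precision arithmetic runs on GPU tensor cores, which is why the HPL-MxP numbers below are several times larger than the same machines’ standard HPL scores.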

This year’s winner of the HPL-MxP category is the Aurora system with 11.6 EFlop/s. The second spot goes to Frontier with 11.4 EFlop/s, and the No. 3 spot goes to LUMI with 2.35 EFlop/s.