
World’s Fastest Supercomputers Look Familiar on November TOP500 List

Today marked the release of the 54th edition of the TOP500 list of the world’s fastest supercomputers. The top of the list remains largely unchanged; in fact, the top 10 systems are identical to the previous list. “The latest TOP500 list saw China and the US maintaining their dominance of the list, albeit in different categories. Meanwhile, the aggregate performance of the 500 systems, based on the High Performance Linpack (HPL) benchmark, continues to rise and now sits at 1.66 exaflops. The entry level to the list has risen to 1.14 petaflops, up from 1.02 petaflops in the previous list in June 2019.”

Tackling Turbulence on the Summit Supercomputer

Researchers at the Georgia Institute of Technology have achieved world record performance on the Summit supercomputer using a new algorithm for turbulence simulation. “The team identified the most time-intensive parts of a base CPU code and set out to design a new algorithm that would reduce the cost of these operations, push the limits of the largest problem size possible, and take advantage of the unique data-centric characteristics of Summit, the world’s most powerful and smartest supercomputer for open science.”

Deep Learning on Summit Supercomputer Powers Insights for Nuclear Waste Remediation

A research collaboration between LBNL, PNNL, Brown University, and NVIDIA has achieved exaflop (half-precision) performance on the Summit supercomputer with a deep learning application used to model subsurface flow in the study of nuclear waste remediation. Their achievement, which will be presented during the “Deep Learning on Supercomputers” workshop at SC19, demonstrates the promise of physics-informed generative adversarial networks (GANs) for analyzing complex, large-scale science problems.

Supercomputing the Building Blocks of the Universe

In this special guest feature, ORNL profiles researcher Gaute Hagen, who uses the Summit supercomputer to model scientifically interesting atomic nuclei. “A central question he is trying to answer is: what is the size of a nucleus? The difference between the radii of neutron and proton distributions—called the “neutron skin”—has implications for the equation-of-state of neutron matter and neutron stars.”

How the Results of Summit and Sierra are Influencing Exascale

Al Geist from ORNL gave this talk at the HPC User Forum. “Two DOE national laboratories are now home to the fastest supercomputers in the world, according to the TOP500 List, a semiannual ranking of the world’s fastest computing systems. The IBM Summit system at Oak Ridge National Laboratory is currently ranked number one, while Lawrence Livermore National Laboratory’s IBM Sierra system has climbed to the number two spot.”

Applying Cloud Techniques to Address Complexity in HPC System Integrations

Arno Kolster from Providentia Worldwide gave this talk at the HPC User Forum. “OLCF and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data.”

Podcast: ECP Team Achieves Huge Performance Gain on Materials Simulation Code

The Exascale Atomistics for Accuracy, Length, and Time (EXAALT) project within the US Department of Energy’s Exascale Computing Project (ECP) has made a big step forward by delivering a five-fold performance advance in addressing its fusion energy materials simulations challenge problem. “Summit is at roughly 200 petaflops, so by the time we go to the exascale, we should have another factor of five. That starts to be a transformative kind of change in our ability to do the science on these machines.”
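The scaling arithmetic in the quote is easy to check. A minimal sketch (not from the article) using the figures given above, with Summit taken at roughly 200 petaflops and exascale as 1,000 petaflops:

```python
# Checking the quoted scaling arithmetic (figures from the article above).
summit_pf = 200        # Summit's approximate performance, in petaflops
exascale_pf = 1000     # one exaflop, expressed in petaflops
hardware_factor = exascale_pf / summit_pf   # the quoted "another factor of five"

# EXAALT reports a five-fold algorithmic speedup already delivered;
# combined with the hardware jump, the naive total gain would be:
algorithmic_factor = 5
combined = hardware_factor * algorithmic_factor

print(hardware_factor)  # 5.0
print(combined)         # 25.0
```

The combined 25x figure is a back-of-the-envelope product of the two factors, not a claim made in the article itself.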

AI Approach Points to Bright Future for Fusion Energy

Researchers are using Deep Learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called “shots,” both those that led to disruptions and those that did not, to determine which factors cause disruptions.”

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.

Summit Supercomputer Triples Performance Record on New HPL-AI Benchmark

“Using HPL-AI, a new approach to benchmarking AI supercomputers, ORNL’s Summit system has achieved unprecedented performance levels of 445 petaflops, or nearly half an exaflop. That compares with the system’s official performance of 148 petaflops announced in the new TOP500 list of the world’s fastest supercomputers.”
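The “triples” in the headline follows directly from the two quoted figures. A quick sketch verifying the ratio:

```python
# Verifying the headline's "triples" claim from the quoted numbers.
hpl_ai_pf = 445   # Summit's HPL-AI (mixed-precision) result, petaflops
hpl_pf = 148      # Summit's official HPL result, petaflops
speedup = hpl_ai_pf / hpl_pf

print(round(speedup, 2))  # 3.01, i.e. roughly triple
```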