Deep Learning on Summit Supercomputer Powers Insights for Nuclear Waste Remediation

A research collaboration between LBNL, PNNL, Brown University, and NVIDIA has achieved exaflop (half-precision) performance on the Summit supercomputer with a deep learning application used to model subsurface flow in the study of nuclear waste remediation. Their achievement, which will be presented during the “Deep Learning on Supercomputers” workshop at SC19, demonstrates the promise of physics-informed generative adversarial networks (GANs) for analyzing complex, large-scale science problems.
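The paper's exact network is not reproduced here, but the core idea of a physics-informed GAN is to add, on top of the usual adversarial loss, a penalty on the residual of the governing flow equation, evaluated by automatic differentiation. A minimal PyTorch sketch of that loss structure; the toy Laplace-style residual, layer sizes, and all names are chosen for illustration, not taken from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical minimal generator: maps latent noise + 2-D coordinates
# to a scalar field value (e.g., pressure in a subsurface flow model).
class Generator(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, z, xy):
        return self.net(torch.cat([z, xy], dim=-1))

def physics_residual(gen, z, xy):
    """Penalty on the residual of a toy steady-state equation,
    -laplacian(p) = 0, computed by automatic differentiation."""
    xy = xy.clone().requires_grad_(True)
    p = gen(z, xy)
    grads = torch.autograd.grad(p.sum(), xy, create_graph=True)[0]
    lap = 0.0
    for i in range(2):  # second derivatives w.r.t. x and y
        lap = lap + torch.autograd.grad(grads[:, i].sum(), xy,
                                        create_graph=True)[0][:, i]
    return (lap ** 2).mean()

# Total loss = adversarial term (discriminator omitted here) + physics term.
gen = Generator()
z = torch.randn(128, 8)
xy = torch.rand(128, 2)
adv_loss = torch.tensor(0.0)  # placeholder for the usual GAN loss
loss = adv_loss + 1.0 * physics_residual(gen, z, xy)
loss.backward()
```

The physics term is what distinguishes this from a plain GAN: samples that violate the governing PDE are penalized even if they fool the discriminator.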

Supercomputing the Building Blocks of the Universe

In this special guest feature, ORNL profiles researcher Gaute Hagen, who uses the Summit supercomputer to model scientifically interesting atomic nuclei. “A central question he is trying to answer is: what is the size of a nucleus? The difference between the radii of neutron and proton distributions—called the “neutron skin”—has implications for the equation-of-state of neutron matter and neutron stars.”
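For reference (this definition is standard in nuclear physics, not from the article): the neutron skin thickness is the difference between the root-mean-square radii of the neutron and proton density distributions,

```latex
\Delta r_{np} \;=\; \sqrt{\langle r^2 \rangle_n} \;-\; \sqrt{\langle r^2 \rangle_p}
```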

How the Results of Summit and Sierra are Influencing Exascale

Al Geist from ORNL gave this talk at the HPC User Forum. “Two DOE national laboratories are now home to the fastest supercomputers in the world, according to the TOP500 List, a semiannual ranking of the world’s fastest computing systems. The IBM Summit system at Oak Ridge National Laboratory is currently ranked number one, while Lawrence Livermore National Laboratory’s IBM Sierra system has climbed to the number two spot.”

Applying Cloud Techniques to Address Complexity in HPC System Integrations

Arno Kolster from Providentia Worldwide gave this talk at the HPC User Forum. “OLCF and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data.”

Podcast: ECP Team Achieves Huge Performance Gain on Materials Simulation Code

The Exascale Atomistics for Accuracy, Length, and Time (EXAALT) project within the US Department of Energy’s Exascale Computing Project (ECP) has taken a big step forward, delivering a five-fold performance gain on its fusion energy materials simulation challenge problem. “Summit is at roughly 200 petaflops, so by the time we go to the exascale, we should have another factor of five. That starts to be a transformative kind of change in our ability to do the science on these machines.”

AI Approach Points to Bright Future for Fusion Energy

Researchers are using deep learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called “shots,” both those that led to disruptions and those that did not, to determine which factors cause disruptions.”
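As a schematic of the hybrid architecture described above (not FRNN's actual configuration; the layer sizes, signal shapes, and names here are illustrative), a PyTorch sketch might encode each time step's profile signals with a 1-D convolution and integrate the resulting sequence with an LSTM:

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Illustrative CNN+RNN hybrid: a Conv1d encodes profile signals
    (e.g., density or temperature vs. radius) at each time step, an LSTM
    integrates the encoded sequence, and a linear head scores the shot."""
    def __init__(self, n_channels=4, profile_len=32, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the radial dimension
        )
        self.rnn = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, channels, profile_len)
        b, t, c, l = x.shape
        feats = self.encoder(x.reshape(b * t, c, l)).squeeze(-1)  # (b*t, 16)
        out, _ = self.rnn(feats.reshape(b, t, 16))
        return torch.sigmoid(self.head(out[:, -1]))  # disruption probability

model = DisruptionPredictor()
shots = torch.randn(8, 100, 4, 32)  # 8 synthetic shots, 100 time steps each
print(model(shots).shape)           # torch.Size([8, 1])
```

The convolution handles the “many different variables” at each instant, while the recurrent layer captures how a shot evolves toward (or away from) disruption over time.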

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.

Summit Supercomputer Triples Performance Record on new HPL-AI Benchmark

“Using HPL-AI, a new approach to benchmarking AI supercomputers, ORNL’s Summit system has achieved unprecedented performance levels of 445 petaflops or nearly half an exaflops. That compares with the system’s official performance of 148 petaflops announced in the new TOP500 list of the world’s fastest supercomputers.”
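The benchmark's approach is to solve a dense linear system cheaply in low precision and then recover full double-precision accuracy via iterative refinement. A minimal NumPy sketch of that idea, using float32 as a stand-in for half precision (NumPy has no FP16 solver), plain refinement in place of the GMRES-based scheme the reference implementation uses, and a repeated solve where a real code would reuse the LU factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned system
b = rng.standard_normal(n)

# Cheap solve in reduced precision (stand-in for the FP16 factorization).
A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# Iterative refinement: residuals accumulated in float64,
# correction solves done in the cheap precision.
for _ in range(5):
    r = b - A @ x                                  # high-precision residual
    d = np.linalg.solve(A32, r.astype(np.float32)) # low-precision correction
    x = x + d.astype(np.float64)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # ~1e-15 relative residual
```

The expensive O(n³) work happens in the fast low-precision arithmetic; the O(n²) refinement steps restore the accuracy, which is how a 148-petaflop (FP64) machine can post a 445-petaflop result.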

Video: Simulating Planet-Wide Bioenergy and Human Health on the Summit Supercomputer

In this video from PASC19 in Zurich, Dan Jacobson from ORNL describes how researchers are using the #1 Summit supercomputer to tackle the challenges facing an ever-growing human population. These planet-wide simulations could not even have been attempted before the advent of pre-exascale systems like Summit. “We are using CoMet to investigate the genetic architectures underlying complex traits in applications from bioenergy to human clinical genomics.”

Achieving ExaOps with the CoMet Comparative Genomics Application

Wayne Joubert’s talk at the HPC User Forum described how researchers at the US Department of Energy’s Oak Ridge National Laboratory (ORNL) achieved record throughput of 1.88 ExaOps with the CoMet algorithm. CoMet, the first science application to run at the exascale level, reached this speed while analyzing genomic data on the recently launched Summit supercomputer.
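CoMet reaches these rates by recasting its comparative-genomics metrics as dense matrix-matrix multiplications (GEMMs), which map directly onto Summit's tensor cores. A simplified NumPy sketch of that mapping; the 0/1/2 genotype coding and exact-match count below are illustrative, not CoMet's actual CCC metric:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_snps = 6, 1000
# Genotypes coded 0/1/2 (copies of the alternate allele) per SNP.
G = rng.integers(0, 3, size=(n_samples, n_snps))

# One-hot encode each genotype state so comparisons become dot products:
# X has shape (n_samples, 3 * n_snps).
X = np.stack([(G == k) for k in range(3)], axis=-1)
X = X.reshape(n_samples, 3 * n_snps).astype(np.float64)

# A single GEMM now yields, for every pair of samples, the number of
# SNPs at which their genotypes match exactly.
matches = X @ X.T
print(matches.shape)                          # (6, 6)
print(np.allclose(np.diag(matches), n_snps))  # True: each sample matches itself
```

Because the all-pairs comparison collapses into one large matrix product, the work runs at near-peak GEMM rates instead of being bound by irregular pairwise loops.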