AI Approach Points to Bright Future for Fusion Energy

Researchers are using deep learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables, such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called ‘shots,’ both those that led to disruptions and those that did not, to determine which factors cause disruptions.”
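
What might such a hybrid look like in code? Below is a minimal PyTorch sketch of a CNN-plus-RNN disruption predictor in the spirit of the description above; the signal names, layer sizes, and shot length are illustrative placeholders, not FRNN’s actual configuration.

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Toy CNN+RNN hybrid: a 1-D convolution summarizes profile
    measurements at each time step, and an LSTM integrates scalar
    signals plus the profile summary over the course of a shot."""
    def __init__(self, n_scalars=4, profile_len=64, hidden=128):
        super().__init__()
        # CNN branch: compress each radial profile (e.g. density vs. radius)
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),  # -> 16 channels x 4 positions = 64 features
        )
        # RNN branch: scalar signals (plasma current, temperature, density, ...)
        # concatenated with the 64-dim CNN summary at every time step
        self.rnn = nn.LSTM(n_scalars + 64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # disruption risk at each time step

    def forward(self, scalars, profiles):
        # scalars: (batch, time, n_scalars); profiles: (batch, time, profile_len)
        b, t, p = profiles.shape
        feats = self.conv(profiles.reshape(b * t, 1, p)).reshape(b, t, -1)
        out, _ = self.rnn(torch.cat([scalars, feats], dim=-1))
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, time)

# One synthetic 200-step "shot": 4 scalar channels, a 64-point profile
model = DisruptionPredictor()
risk = model(torch.randn(1, 200, 4), torch.randn(1, 200, 64))
print(risk.shape)  # torch.Size([1, 200])
```

The real FRNN trains on thousands of shots across many GPUs; the sketch only shows how convolutional profile summaries can feed a recurrent network that emits a disruption risk at every time step.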

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.
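
For readers who want to try the underlying service, transfers like this are driven through the Globus transfer API. Here is a minimal sketch assuming the globus-sdk Python package (v3-style usage); the access token, endpoint UUIDs, and paths are placeholders, not the Argonne team’s actual setup.

```python
import globus_sdk

TRANSFER_TOKEN = "..."  # placeholder; obtained via a Globus Auth flow
SRC = "11111111-2222-3333-4444-555555555555"  # hypothetical source endpoint
DST = "66666666-7777-8888-9999-000000000000"  # hypothetical destination endpoint

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Describe the transfer: one recursive directory, checksums verified on arrival
tdata = globus_sdk.TransferData(
    source_endpoint=SRC,
    destination_endpoint=DST,
    label="cosmology-simulation-output",
    verify_checksum=True,
)
tdata.add_item("/project/simulations/run42/", "/archive/run42/", recursive=True)

task = tc.submit_transfer(tdata)
print("Submitted Globus task:", task["task_id"])
```

Globus then manages the transfer asynchronously, retrying failures and verifying integrity, which is what makes petabyte-scale moves practical.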

Summit Supercomputer Triples Performance Record on new HPL-AI Benchmark

“Using HPL-AI, a new approach to benchmarking AI supercomputers, ORNL’s Summit system has achieved unprecedented performance levels of 445 petaflops, or nearly half an exaflop. That compares with the system’s official performance of 148 petaflops announced in the new TOP500 list of the world’s fastest supercomputers.”
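
The idea behind HPL-AI is mixed-precision arithmetic: factor the matrix in fast, low precision, then recover full double-precision accuracy through iterative refinement. Below is a simplified NumPy/SciPy sketch of that principle using a float32 LU factorization; the actual benchmark uses FP16 factorization with GMRES-based refinement on GPUs, so treat this as an illustration of the idea rather than the benchmark code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Factor once in low precision -- the expensive, accelerator-friendly step
lu, piv = lu_factor(A.astype(np.float32))

# Iterative refinement: correct the low-precision solve in float64
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x  # residual computed in full precision
    x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)

print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Because the O(n³) factorization runs in low precision while only the cheap O(n²) refinement steps run in FP64, hardware like Summit’s Tensor Cores can deliver far higher throughput than on standard HPL.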

Video: Simulating Planet-Wide Bioenergy and Human Health on the Summit Supercomputer

In this video from PASC19 in Zurich, Dan Jacobson from ORNL describes how researchers are using the #1 Summit supercomputer to tackle the challenges facing an ever-growing human population. These planet-wide simulations could not even have been attempted before the advent of pre-exascale systems like Summit. “We are using CoMet to investigate the genetic architectures underlying complex traits in applications from bioenergy to human clinical genomics.”

Achieving ExaOps with the CoMet Comparative Genomics Application

Wayne Joubert’s talk at the HPC User Forum described how researchers at the US Department of Energy’s Oak Ridge National Laboratory (ORNL) achieved a record throughput of 1.88 ExaOps with the CoMet comparative genomics algorithm. As the first science application to run at the exascale level, CoMet reached this speed while analyzing genomic data on the recently launched Summit supercomputer.
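
Much of CoMet’s speed comes from recasting all-pairs vector comparisons as dense matrix multiplication, which maps directly onto GPU Tensor Cores. The toy NumPy sketch below illustrates that GEMM reformulation for pairwise genotype comparisons; the encoding and the final scaling are simplified stand-ins, not CoMet’s exact CCC metric.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loci, n_samples = 100, 500

# Genotypes coded 0/1/2: minor-allele count at each locus, for each sample
G = rng.integers(0, 3, size=(n_loci, n_samples))

# Encode each genotype as two allele counts so that a single matrix
# product tallies allele co-occurrences for every pair of loci at once.
A = np.stack([2 - G, G], axis=-1).reshape(n_loci, 2 * n_samples).astype(np.float32)

counts = A @ A.T                               # (n_loci, n_loci) in one GEMM
similarity = counts / counts.diagonal().max()  # illustrative scaling only

print(similarity.shape)  # (100, 100): every locus pair compared at once
```

One dense product replaces millions of explicit pairwise loops, which is the kind of restructuring that lets a comparative genomics code saturate Summit’s mixed-precision hardware.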

Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer

Jack Wells from ORNL gave this talk at the GPU Technology Conference. “HPC centers have been traditionally configured for simulation workloads, but deep learning has been increasingly applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We’ll share benchmarks comparing natively compiled code with containers on Power systems like Summit, as well as best practices for deploying deep learning models on HPC resources for scientific workflows.”
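
As a concrete example of wiring a deep learning framework to an MPI backend on such systems, here is a minimal Horovod-style PyTorch skeleton; the model, learning rate, and launcher commands are placeholders, and the site-specific details (schedulers, containers, parallel file systems) are exactly the pain points the talk covers.

```python
import torch
import horovod.torch as hvd

hvd.init()                               # one process per GPU, launched via MPI
torch.cuda.set_device(hvd.local_rank())

model = torch.nn.Linear(100, 10).cuda()  # stand-in for a real network
opt = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across all ranks with an allreduce on every step
opt = hvd.DistributedOptimizer(opt, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(10):
    x = torch.randn(32, 100, device="cuda")
    loss = model(x).square().mean()      # dummy loss for illustration
    opt.zero_grad()
    loss.backward()
    opt.step()

# Launched under the scheduler, e.g.:
#   jsrun -n 6 python train.py      (Summit-style)
#   mpirun -np 4 python train.py    (generic MPI)
```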

Interview: Why HPC is the Right Tool for Physics

Over at the SC19 Blog, Charity Plata continues the HPC is Now series of interviews with Enrico Rinaldi, a physicist and special postdoctoral fellow with the Riken BNL Research Center. This month, Rinaldi discusses why HPC is the right tool for physics and shares the best formula for garnering a Gordon Bell Award nomination. “Sierra and Summit are incredible machines, and we were lucky to be among the first teams to use them to produce new scientific results. The impact on my lattice QCD research was tremendous, as demonstrated by the Gordon Bell paper submission.”

Looking Back at SC18 and the Road Ahead to Exascale

In this special guest feature from Scientific Computing World, Robert Roe reports on new technology and 30 years of the US supercomputing conference at SC18 in Dallas. “From our volunteers to our exhibitors to our students and attendees – SC18 was inspirational,” said SC18 general chair Ralph McEldowney. “Whether it was in technical sessions or on the exhibit floor, SC18 inspired people with the best in research, technology, and information sharing.”

New TOP500 List topped by DOE Supercomputers

The latest TOP500 list of the world’s fastest supercomputers is out, a remarkable ranking that shows five Department of Energy supercomputers in the top 10, with the top two spots captured by Summit at Oak Ridge and Sierra at Livermore. With the number one and number two systems on the planet, the “Rebel Alliance” vendors of IBM, Mellanox, and NVIDIA stand tall above the others.

Summit Supercomputer Breaks Exaop Barrier on Neural Net Trained to Recognize Extreme Weather Patterns

“Using a climate dataset from Berkeley Lab on the Summit supercomputer at Oak Ridge, researchers trained a deep neural network to identify extreme weather patterns from high-resolution climate simulations. By tapping into the specialized NVIDIA Tensor Cores built into the GPUs at scale, the researchers achieved a peak performance of 1.13 exaops and a sustained performance of 0.999 exaops, the fastest deep learning algorithm reported to date and an achievement that earned them a spot on this year’s list of finalists for the Gordon Bell Prize.”
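
The Tensor Core technique the quote refers to is mixed-precision training: the heavy matrix math runs in FP16 while loss scaling protects gradient range and weight updates stay in higher precision. Here is a generic PyTorch sketch of that pattern; the model shape and data are placeholders, and the Gordon Bell team’s actual code was a far larger segmentation network, so this shows only the mechanism.

```python
import torch
import torch.nn.functional as F

# Tiny stand-in for a climate segmentation model (4 output classes per pixel)
model = torch.nn.Sequential(
    torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 4, 3, padding=1),
).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # loss scaling for FP16 gradients

for step in range(10):
    x = torch.randn(8, 16, 256, 256, device="cuda")        # fake climate channels
    y = torch.randint(0, 4, (8, 256, 256), device="cuda")  # fake per-pixel labels
    with torch.cuda.amp.autocast():   # FP16 convolutions run on Tensor Cores
        loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
```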