Applying Cloud Techniques to Address Complexity in HPC System Integrations

Arno Kolster from Providentia Worldwide gave this talk at the HPC User Forum. “OLCF and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data.”
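The summary does not describe Providentia's platform in detail, but the core idea of fusing compute telemetry with facility and weather data on a common timeline can be sketched in a few lines. The following is a hypothetical illustration using pandas; the file names, column names, and five-minute join tolerance are assumptions, not details from the project.

```python
import pandas as pd

# Hypothetical inputs: per-minute Summit telemetry and cooling-plant/weather readings.
telemetry = pd.read_csv("summit_power.csv", parse_dates=["timestamp"]).sort_values("timestamp")
facility = pd.read_csv("cooling_weather.csv", parse_dates=["timestamp"]).sort_values("timestamp")

# Align the two streams on time (nearest reading within 5 minutes) so that compute load,
# cooling-plant state, and outdoor conditions can be analyzed together.
merged = pd.merge_asof(
    telemetry, facility,
    on="timestamp", direction="nearest", tolerance=pd.Timedelta("5min"),
)

# A simple derived efficiency signal: total facility power per unit of compute power.
merged["pue_proxy"] = (merged["it_power_kw"] + merged["cooling_power_kw"]) / merged["it_power_kw"]
print(merged[["timestamp", "outdoor_temp_c", "pue_proxy"]].tail())
```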

Podcast: ECP Team Achieves Huge Performance Gain on Materials Simulation Code

The Exascale Atomistics for Accuracy, Length, and Time (EXAALT) project within the US Department of Energy’s Exascale Computing Project (ECP) has taken a big step forward by delivering a five-fold performance advance on its fusion energy materials simulation challenge problem. “Summit is at roughly 200 petaflops, so by the time we go to the exascale, we should have another factor of five. That starts to be a transformative kind of change in our ability to do the science on these machines.”

AI Approach Points to Bright Future for Fusion Energy

Researchers are using Deep Learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called “shots,” both those that led to disruptions and those that did not, to determine which factors cause disruptions.”
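FRNN's actual architecture and input pipeline are considerably more elaborate, but the recurrent-plus-convolutional pattern described above can be sketched roughly as follows. This is a minimal PyTorch illustration; the signal count, layer sizes, and per-time-step output are assumptions, not FRNN's real configuration.

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Toy CNN+RNN classifier for multichannel plasma signals (illustrative only)."""

    def __init__(self, n_signals=14, hidden=64):
        super().__init__()
        # Convolutional front end extracts local features from each time window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_signals, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Recurrent layer integrates those features over the history of the shot.
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        # Per-time-step score: how likely a disruption is approaching.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                   # x: (batch, time, n_signals)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)    # (batch, time, 32)
        out, _ = self.rnn(h)
        return self.head(out).squeeze(-1)                   # logits per time step

# One batch of 8 simulated shots, 2000 time steps, 14 signals (current, density, ...).
model = DisruptionPredictor()
scores = model(torch.randn(8, 2000, 14))
print(scores.shape)   # torch.Size([8, 2000])
```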

Argonne Team Breaks Record with 2.9-Petabyte Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.
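The 2.9-petabyte move used Globus's managed transfer service. For a sense of what driving such a transfer programmatically looks like, here is a minimal sketch using the Globus Python SDK (globus_sdk); the client ID, endpoint UUIDs, and paths are placeholders, and the simulation team's actual workflow was certainly more involved.

```python
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"      # placeholder: register your own app with Globus
SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"        # placeholder endpoint UUIDs
DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"

# Interactive native-app login to obtain a Transfer API token.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow()
print("Log in at:", auth.oauth2_get_authorize_url())
tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))
access_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Submit a recursive, checksum-verified directory transfer and let Globus manage it.
tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(access_token))
tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT,
                                label="cosmology-simulation-output",
                                sync_level="checksum")
tdata.add_item("/project/sims/output/", "/archive/sims/output/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted transfer task:", task["task_id"])
```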

Summit Supercomputer Triples Performance Record on New HPL-AI Benchmark

“Using HPL-AI, a new approach to benchmarking AI supercomputers, ORNL’s Summit system has achieved unprecedented performance levels of 445 petaflops or nearly half an exaflops. That compares with the system’s official performance of 148 petaflops announced in the new TOP500 list of the world’s fastest supercomputers.”
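The gap between the 148-petaflop HPL result and the 445-petaflop HPL-AI result largely reflects HPL-AI's use of reduced-precision arithmetic: the benchmark does the expensive factorization in low precision, then recovers double-precision accuracy with iterative refinement. Below is a rough NumPy/SciPy sketch of that idea, with float32 standing in for the FP16 tensor-core arithmetic used on Summit; it is an illustration, not the benchmark code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b with a low-precision LU factorization plus double-precision refinement."""
    A32 = A.astype(np.float32)
    lu, piv = lu_factor(A32)                         # expensive O(n^3) work done in low precision
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                # residual computed in double precision
        x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n)) + n * np.eye(n)      # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
# Relative residual should be near double-precision level after refinement.
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```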

Video: Simulating Planet-Wide Bioenergy and Human Health on the Summit Supercomputer

In this video from PASC19 in Zurich, Dan Jacobson from ORNL describes how researchers are using the #1 Summit supercomputer to tackle the challenges facing an ever-growing human population. These planet-wide simulations could not even have been attempted before the advent of pre-exascale systems like Summit. “We are using CoMet to investigate the genetic architectures underlying complex traits in applications from bioenergy to human clinical genomics.”

Achieving ExaOps with the CoMet Comparative Genomics Application

Wayne Joubert’s talk at the HPC User Forum described how researchers at the US Department of Energy’s Oak Ridge National Laboratory (ORNL) achieved a record throughput of 1.88 ExaOps on the CoMet algorithm. As the first science application to run at the exascale level, CoMet achieved this remarkable speed analyzing genomic data on the recently launched Summit supercomputer.
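CoMet reaches that throughput because its all-pairs comparisons over binary genomic data can be cast as dense matrix multiplications, which map onto Summit's GPU tensor cores. The toy version below shows the reformulation in NumPy; the data and the Jaccard-style normalization are illustrative, not CoMet's actual Custom Correlation Coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy allele-presence matrix: rows are variants, columns are samples (0/1 indicators).
n_variants, n_samples = 2000, 512
G = (rng.random((n_variants, n_samples)) < 0.3).astype(np.float32)

# One matrix product yields co-occurrence counts for every pair of variants at once;
# this GEMM formulation is what lets the comparisons run efficiently on GPUs.
co_occurrence = G @ G.T                      # (n_variants, n_variants)

# A simple Jaccard-style normalization of those counts (illustrative only).
totals = G.sum(axis=1)
union = np.add.outer(totals, totals) - co_occurrence
similarity = co_occurrence / np.maximum(union, 1.0)
```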

Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer

Jack Wells from ORNL gave this talk at the GPU Technology Conference. “HPC centers have been traditionally configured for simulation workloads, but deep learning frameworks are increasingly being applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We’ll share benchmarks of natively compiled frameworks versus containers on Power systems like Summit, as well as best practices for deploying deep learning models on HPC resources for scientific workflows.”
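One widely used way to marry a deep learning framework with an HPC center's MPI stack, on Summit and similar systems, is Horovod's MPI-backed data parallelism. The sketch below shows the basic pattern; it is a generic example, not the specific configuration benchmarked in the talk, and it assumes the job is launched with one rank per GPU under MPI.

```python
import torch
import horovod.torch as hvd

hvd.init()                                        # one rank per GPU, launched under MPI (jsrun/mpirun)
torch.cuda.set_device(hvd.local_rank())           # pin each rank to its local GPU

model = torch.nn.Linear(1024, 10).cuda()          # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are allreduced across ranks each step,
# and make sure every rank starts from identical weights and optimizer state.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for step in range(100):
    x = torch.randn(32, 1024).cuda()              # each rank trains on its own shard of data
    y = torch.randint(0, 10, (32,)).cuda()
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```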

Interview: Why HPC is the Right Tool for Physics

Over at the SC19 Blog, Charity Plata continues the HPC is Now series of interviews with Enrico Rinaldi, a physicist and special postdoctoral fellow with the Riken BNL Research Center. This month, Rinaldi discusses why HPC is the right tool for physics and shares the best formula for garnering a Gordon Bell Award nomination. “Sierra and Summit are incredible machines, and we were lucky to be among the first teams to use them to produce new scientific results. The impact on my lattice QCD research was tremendous, as demonstrated by the Gordon Bell paper submission.”

Looking Back at SC18 and the Road Ahead to Exascale

In this special guest feature from Scientific Computing World, Robert Roe reports on new technology and 30 years of the US supercomputing conference at SC18 in Dallas. “From our volunteers to our exhibitors to our students and attendees – SC18 was inspirational,” said SC18 general chair Ralph McEldowney. “Whether it was in technical sessions or on the exhibit floor, SC18 inspired people with the best in research, technology, and information sharing.”