Video: Intel and Cray to Build First USA Exascale Supercomputer for DOE in 2021

Today Intel announced plans to deliver the first exaflop supercomputer in the United States. The Aurora supercomputer will be used to dramatically advance scientific research and discovery. The contract is valued at more than $500 million and will be delivered to Argonne National Laboratory by Intel and sub-contractor Cray in 2021. “Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO.

Argonne Looks to Singularity for HPC Code Portability

Over at Argonne, Nils Heinonen writes that researchers are using the open source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere. “Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced.”
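As a rough, hypothetical sketch of what such a containerized, reproducible workflow might look like in practice (the recipe, image, and script names below are invented; only the standard singularity build and exec commands are assumed to be available):

import subprocess

RECIPE = "workflow.def"      # hypothetical Singularity recipe describing the software stack
IMAGE = "workflow-v1.0.sif"  # the snapshot: an immutable image that can be archived and shared

# Build the image once from the recipe (may require root or the --fakeroot option).
subprocess.run(["singularity", "build", IMAGE, RECIPE], check=True)

# Later, or on a different machine, rerun the analysis with the same software stack and parameters.
subprocess.run(["singularity", "exec", IMAGE, "python3", "analysis.py", "--seed", "42"], check=True)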

Machine Learning Award Powers Engine Design at Argonne

Over at Argonne, Jared Sagoff writes that automotive manufacturers are leveraging the power of DOE supercomputers to simulate the combustion engines of the future. “As part of a partnership between Argonne, Convergent Science, and Parallel Works, engine modelers are beginning to use machine learning algorithms and artificial intelligence to optimize their simulations. This alliance recently received a Technology Commercialization Fund award from the DOE to complete this important project.”
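The article does not show the team’s actual workflow, but one common pattern for using machine learning to optimize expensive simulations is a surrogate-model loop. The sketch below is purely illustrative: the objective function, parameter values, and acquisition rule are all invented, and scikit-learn stands in for whatever tooling the project actually uses.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    # Stand-in for a costly engine simulation; returns an efficiency-like score.
    return -(x - 0.6) ** 2 + 0.9

X = np.array([[0.1], [0.4], [0.8]])                    # parameter values already simulated
y = np.array([expensive_simulation(v[0]) for v in X])  # their simulated results

gp = GaussianProcessRegressor().fit(X, y)              # ML surrogate of the expensive simulations

candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
next_x = candidates[np.argmax(mean + std)][0]          # simple upper-confidence-bound pick
print("next parameter value to simulate:", next_x)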

Video: Supercomputing the Secrets of Giant Stars

In this video, supercomputing power and algorithms help astrophysicists untangle giant stars’ brightness, temperature, and chemical variations. “As a star becomes redder (and cooler), it becomes more variable. That’s a pretty firm prediction from what we’ve found, and that’s going to be what’s exciting to test in detail.”

Data Science Program at Argonne Looks to Machine Learning for New Breakthroughs

Over at Argonne, Nils Heinonen writes about four new projects for the ALCF Data Science Program that will utilize machine learning, deep learning, and other artificial intelligence methods to enable data-driven discoveries across scientific disciplines. “Each project intends to implement novel machine learning techniques; some will integrate these methods with simulations and experiments, while others will pioneer uncertainty quantification and visualization to aid in the interpretation of deep neural networks.”

ALCF – The March toward Exascale

David E. Martin gave this talk at the HPC User Forum. “In 2021, the Argonne Leadership Computing Facility (ALCF) will deploy Aurora, a new Intel-Cray system. Aurora will be capable of over 1 exaflops. It is expected to have over 50,000 nodes and over 5 petabytes of total memory, including high bandwidth memory. The Aurora architecture will enable scientific discoveries using simulation, data and learning.”

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, equivalent to nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
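The movie comparison is easy to sanity-check; assuming roughly 3.5 GB per high-definition movie (our assumption, not a figure from the article):

# Back-of-the-envelope check: 50 PB divided by an assumed ~3.5 GB per HD movie.
lhc_output_bytes = 50e15                  # 50 petabytes (decimal)
hd_movie_bytes = 3.5e9                    # assumed size of one high-definition movie
print(lhc_output_bytes / hd_movie_bytes)  # roughly 14 million movies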

Video: Rick Stevens from Argonne on the CANDLE Project for Exascale

In this video, Mike Bernhardt from ECP discusses the CANDLE project for Exascale with Rick Stevens from Argonne. “CANDLE is endeavoring to build the software environment for solving very large-scale distributed learning problems on the DOE Leadership Computing platforms.”

Towards Exascale Engine Simulations with NEK5000

In this video from the HPC User Forum in Detroit, Muhsin Ameen from Argonne National Laboratory presents: Towards Exascale Engine Simulations with NEK5000. “High-order methods have the potential to overcome the current limitations of standard CFD solvers. For this reason, we have been developing and improving the spectral element code NEK5000 for more than 30 years now.”

Video: Overview of Machine Learning Methods

Machine learning enables systems to learn automatically, based on patterns in data, and make better searches, decisions, or predictions. Machine learning has become increasingly important to scientific discovery. Indeed, the U.S. Department of Energy has stated that “machine learning has the potential to transform Office of Science research best practices in an age where extreme complexity and data overwhelm human cognitive and perception ability by enabling system autonomy to self-manage, heal and find patterns and provide tools for the discovery of new scientific insights.”
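As a concrete, minimal illustration of learning from patterns in data to make predictions (an example of ours, not drawn from the video), the following trains a small classifier with scikit-learn on its bundled iris dataset:

# Minimal supervised-learning sketch: fit a classifier on labeled data, then predict.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                           # learn patterns from the training data
print(accuracy_score(y_test, model.predict(X_test)))  # evaluate predictions on unseen data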