Video: Supercomputing the Secrets of Giant Stars

In this video, supercomputing power and algorithms help astrophysicists untangle giant stars’ brightness, temperature, and chemical variations. “As a star becomes redder (and cooler), it becomes more variable. That’s a pretty firm prediction from what we’ve found, and that’s going to be what’s exciting to test in detail.”

Data Science Program at Argonne Looks to Machine Learning for New Breakthroughs

Over at Argonne, Nils Heinonen writes about four new projects for the ALCF Data Science Program that will utilize machine learning, deep learning, and other artificial intelligence methods to enable data-driven discoveries across scientific disciplines. “Each project intends to implement novel machine learning techniques; some will integrate these methods with simulations and experiments, while others will pioneer uncertainty quantification and visualization to aid in the interpretation of deep neural networks.”

ALCF – The March toward Exascale

David E. Martin gave this talk at the HPC User Forum. “In 2021, the Argonne Leadership Computing Facility (ALCF) will deploy Aurora, a new Intel-Cray system. Aurora will be capable of over 1 exaflops. It is expected to have over 50,000 nodes and over 5 petabytes of total memory, including high bandwidth memory. The Aurora architecture will enable scientific discoveries using simulation, data and learning.”
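
As a rough, illustrative back-of-envelope reading of those headline figures (using the quoted lower bounds of 1 exaflops, 50,000 nodes, and 5 petabytes; actual node counts and memory configuration may differ):

```python
# Rough per-node arithmetic from the quoted Aurora figures (illustrative estimate only).
total_flops = 1e18       # > 1 exaflops (quoted lower bound)
total_nodes = 50_000     # > 50,000 nodes (quoted lower bound)
total_memory_pb = 5      # > 5 PB total memory, including high bandwidth memory

flops_per_node = total_flops / total_nodes                  # ~2e13 = 20 teraflops per node
memory_per_node_gb = total_memory_pb * 1e6 / total_nodes    # ~100 GB per node

print(f"~{flops_per_node / 1e12:.0f} TFLOPS and ~{memory_per_node_gb:.0f} GB of memory per node")
```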

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, the equivalent of nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
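
As a quick sanity check on that comparison, 50 petabytes divided by roughly 15 million movies comes out to a few gigabytes per movie, which is in line with typical HD file sizes (the per-movie size below is an assumption for illustration, not a figure from the article):

```python
# Back-of-envelope check of the "50 PB ≈ 15 million HD movies" comparison.
lhc_output_pb = 50        # expected LHC output this year (from the article)
hd_movie_size_gb = 3.5    # assumed size of one HD movie (illustrative)

movies = lhc_output_pb * 1e6 / hd_movie_size_gb    # PB -> GB, then divide by movie size
print(f"{movies / 1e6:.1f} million movies")        # roughly 14 million
```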

Video: Rick Stevens from Argonne on the CANDLE Project for Exascale

In this video, Mike Bernhardt from ECP discusses the CANDLE project for Exascale with Rick Stevens from Argonne. “CANDLE is endeavoring to build the software environment for solving very large-scale distributed learning problems on the DOE Leadership Computing platforms.”

Towards Exascale Engine Simulations with NEK5000

In this video from the HPC User Forum in Detroit, Muhsin Ameen from Argonne National Laboratory presents: Towards Exascale Engine Simulations with NEK5000. “High-order methods have the potential to overcome the current limitations of standard CFD solvers. For this reason, we have been developing and improving the spectral element code NEK5000 for more than 30 years now.”

Video: Overview of Machine Learning Methods

“Machine learning enables systems to learn automatically, based on patterns in data, and make better searches, decisions, or predictions. Machine learning has become increasingly important to scientific discovery. Indeed, the U.S. Department of Energy has stated that ‘machine learning has the potential to transform Office of Science research best practices in an age where extreme complexity and data overwhelm human cognitive and perception ability by enabling system autonomy to self-manage, heal and find patterns and provide tools for the discovery of new scientific insights.’”

There’s Still Time to Register for HPC User Forum in Detroit – Agenda Targets AI, Autonomous Cars, & Sensor Networks

There’s still time to register for the HPC User Forum in Detroit. The meeting takes place September 4-6 in Dearborn, Michigan. Key topics include AI and other advanced analytics, automated driving systems a.k.a. self-driving vehicles, additive manufacturing, and HPC in clouds (and outer space!). “If AI is in your crosshairs, you won’t want to miss the next HPC User Forum in Dearborn, Michigan. Aside from tackling leadership computing initiatives in the U.S. and around the world, the meeting will zero in on artificial intelligence use cases on prem and in the cloud, especially self-driving vehicle development and urban sensor networks.”

Evolving Scientific Computing at Argonne

Over at Argonne, John Spizzirri writes that the Lab has helped advance the boundaries of high-performance computing technologies through the Argonne Leadership Computing Facility (ALCF). “Realizing the promise of exascale computing, the ALCF is developing the framework by which to harness this immense computing power to an advanced combination of simulation, data analysis, and machine learning. This effort will undoubtedly reframe the way science is conducted, and do so on a global scale.”

DOE Awards 1.5 Billion Hours of Computing Time at Argonne

The ASCR Leadership Computing Challenge has awarded 20 projects a total of 1.5 billion core-hours at Argonne to pursue challenging, high-risk, high-payoff simulations. “The Advanced Scientific Computing Research (ASCR) program, which manages some of the world’s most powerful supercomputing facilities, selects projects every year in areas directly related to the DOE mission for broadening the community of researchers capable of using leadership computing resources, and serving national interests for the advancement of scientific discovery, technological innovation, and economic competitiveness.”