Let’s Talk Exascale: Transforming Combustion Science and Technology

In this episode of Let’s Talk Exascale, Jackie Chen from Sandia National Laboratories describes the Combustion-Pele project, which uses predictive simulation to develop cleaner-burning engines. “Almost all practical combustors operate under extremely high turbulence levels to increase the rate of combustion, providing high efficiency, but there are still outstanding challenges in understanding how turbulence affects auto-ignition.”

Radio Free HPC Does the Math on Pending CORAL-2 Exascale Machines

In this podcast, the Radio Free HPC team takes a look at the daunting performance targets in the DOE’s CORAL-2 RFP for Exascale Computers. “So, 1.5 million TeraFlops divided by 7.8 Teraflops per GPU is how many individual accelerators you need, and that’s 192,307. Now, multiply that by 300 watts per accelerator, and it is clear we are going to need something all-new to get where we want to go.”
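As a quick sanity check on the podcast’s back-of-envelope math, here is a minimal sketch in Python. The 7.8-teraflop and 300-watt per-accelerator figures are the hosts’ assumptions, not numbers from the CORAL-2 RFP itself:

    # Back-of-envelope check of the podcast's arithmetic.
    # All inputs are the podcast's assumptions, not official CORAL-2 specs.
    target_tflops = 1_500_000   # 1.5 exaflops, expressed in teraflops
    tflops_per_gpu = 7.8        # assumed peak teraflops per accelerator
    watts_per_gpu = 300         # assumed power draw per accelerator

    gpus_needed = target_tflops / tflops_per_gpu
    total_mw = gpus_needed * watts_per_gpu / 1e6

    print(f"Accelerators needed: {gpus_needed:,.0f}")        # ~192,308
    print(f"Power, accelerators alone: {total_mw:.1f} MW")   # ~57.7 MW

The hosts round down to 192,307; either way, the accelerators alone would draw roughly 58 megawatts before counting CPUs, memory, interconnect, or cooling, which is the point of the “something all-new” remark.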

HLRS and Wuhan to Collaborate on Exascale Computing

The High-Performance Computing Center Stuttgart (HLRS) and Supercomputing Center of Wuhan University have announced plans to cooperate on technology and training projects. “HLRS and the Supercomputing Center at Wuhan University plan to exchange scientists and to focus on key research topics in high-performance computing. Both sides will also share experience in installing large-scale computing systems, particularly because both Wuhan and Stuttgart aim to develop exascale systems.”

Let’s Talk Exascale: Thom Dunning on Molecular Modeling with NWChemEx

In this edition of Let’s Talk Exascale, Thom Dunning from the University of Washington describes the software effort underway for molecular modeling at exascale with NWChemEx. “To date, our work is focused on the redesign of NWChem, but we’ve also explored a number of alternate strategies for implementing the overall redesign as well as the redesign of the algorithms, and this work required access to the ECP computing allocations.”

Balancing the Load – A Million Cores in Concert

“If you’re doing any kind of parallel simulation, and you have a bit of imbalance, all the other cores have to wait for the slowest one,” Junghans says, a problem that compounds as the computing system’s size grows. “The bigger you go on scale, the more these tiny imbalances matter.” On a system like LANL’s Trinity supercomputer, up to 999,999 cores could idle while waiting on a single one to complete a task.
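A minimal sketch of why a single straggler is so costly (illustrative Python, not the project’s code; the core count and the 2% jitter figure are invented for the example):

    import random

    # In a bulk-synchronous parallel step, every core waits at the barrier
    # for the slowest one, so step time is the max over cores, not the mean.
    random.seed(42)
    n_cores = 1_000_000
    # Hypothetical per-core work times: 1.00 s nominal with +/-2% jitter.
    work = [1.0 + random.uniform(-0.02, 0.02) for _ in range(n_cores)]

    step_time = max(work)                   # set by the slowest core
    idle = n_cores * step_time - sum(work)  # everyone else's wasted time

    print(f"Step time: {step_time:.4f} s")
    print(f"Idle time this step: {idle:,.0f} core-seconds")

Even a 2% spread in per-core work turns into roughly 20,000 wasted core-seconds per step at this scale, which is why load-balancing efforts like Junghans’s target the imbalance itself rather than the speed of any single core.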

HPC Market Update from Hyperion Research

In this video from the HPC User Forum in Tucson, Earl Joseph from Hyperion Research presents an HPC Market Update. “Hyperion Research is the new name for the former IDC high performance computing analyst team. As Hyperion Research, we continue all the worldwide activities that spawned the world’s most respected HPC industry analyst group.”

How Exascale will Move Earthquake Simulation Forward

In this video from the HPC User Forum in Tucson, David McCallen from LBNL describes how exascale computing capabilities will enhance earthquake simulation for improved structural safety. “With the major advances occurring in high performance computing, the ability to accurately simulate the complex processes associated with major earthquakes is becoming a reality. High performance simulations offer a transformational approach to earthquake hazard and risk assessments that can dramatically increase our understanding of earthquake processes and provide improved estimates of the ground motions that can be expected in future earthquakes.”

Quantum Computing at NIST

Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. “Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers.”

Radio Free HPC Looks at the New CORAL-2 RFP for Exascale Computers

In this podcast, the Radio Free HPC team looks at the Department of Energy’s new RFP for Exascale Computers. “As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine memory-centric architecture. They have a wager, so listen in to find out where the smart money is.”

Exascale Computing for Long Term Design of Urban Systems

In this episode of Let’s Talk Exascale, Charlie Catlett from Argonne National Laboratory and the University of Chicago describes how extreme-scale HPC will be required for the long-term design of Smart Cities. “Urbanization is a bigger set of challenges in the developing world than in the developed world, but it’s still a challenge for us in U.S. and European cities and Japan.”