Video: Doug Kothe Looks Ahead at the Exascale Computing Project

In this video, Doug Kothe from ORNL provides an update on the Exascale Computing Project. “With respect to progress, marrying high-risk exploratory and high-return R&D with formal project management is a formidable challenge. In January, through what is called DOE’s Independent Project Review, or IPR, process, we learned that we can indeed meet that challenge in a way that allows us to drive hard with a sense of urgency and still deliver on the essential products and solutions. In short, we passed the review with flying colors—and what’s especially encouraging is that the feedback we received tells us what we can do to improve.”

HPE: Design Challenges at Scale

Jimmy Daley from HPE gave this talk at the HPC User Forum in Tucson. “High performance clusters are all about high speed interconnects. Today, these clusters are often built out of a mix of copper and active optical cables. While optical is the future, the cost of active optical cables is 4x – 6x that of copper cables. In this talk, Jimmy Daley looks at the tradeoffs system architects need to make to meet performance requirements at reasonable cost.”
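To make the tradeoff concrete, here is a minimal cost sketch. The copper cable price and the shares of long links are illustrative assumptions; only the 4x-6x optical premium comes from the talk (the midpoint, 5x, is used here).

```python
# Minimal cable-budget sketch. Assumed numbers: the copper price and the
# fraction of links too long for copper are illustrative; only the
# 4x-6x optical premium comes from the talk (midpoint 5x used here).

COPPER_COST = 50.0        # assumed USD per copper cable
OPTICAL_PREMIUM = 5.0     # midpoint of the 4x-6x range from the talk

def cable_budget(n_links, frac_optical):
    """Total cable cost when frac_optical of links need active optical cables."""
    n_optical = round(n_links * frac_optical)
    n_copper = n_links - n_optical
    return n_copper * COPPER_COST + n_optical * COPPER_COST * OPTICAL_PREMIUM

for frac in (0.1, 0.3, 0.5):
    print(f"{frac:.0%} optical links: ${cable_budget(10_000, frac):,.0f}")
```

Even at these made-up prices, the point of the talk comes through: as the share of links that exceed copper reach grows, active optical cables quickly dominate the interconnect budget.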

Let’s Talk Exascale: Making Software Development More Efficient

In this episode of Let’s Talk Exascale, Mike Heroux from Sandia National Laboratories describes the Exascale Computing Project’s Software Development Kit, an organizational approach to reducing the complexity of managing ECP software technology. “My hope is that as we create these SDKs and bring these independently developed products together under a collaborative umbrella, that instead of saying that each of these individual products is available independently, we can start to say that an SDK is available.”

Let’s Talk Exascale: Transforming Combustion Science and Technology

In this episode of Let’s Talk Exascale, Jackie Chen from Sandia National Laboratories describes the Combustion-Pele project, which uses predictive simulation for the development of cleaner-burning engines. “Almost all practical combustors operate under extremely high turbulence levels to increase the rate of combustion, providing high efficiency, but there are still outstanding challenges in understanding how turbulence affects auto-ignition.”

Radio Free HPC Does the Math on Pending CORAL-2 Exascale Machines

In this podcast, the Radio Free HPC team takes a look at daunting performance targets for the DOE’s CORAL-2 RFP for Exascale Computers. “So, 1.5 million TeraFlops divided by 7.8 Teraflops per GPU is how many individual accelerators you need, and that’s 192,307. Now, multiply that by 300 watts per accelerator, and it is clear we are going to need something all-new to get where we want to go.”
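The podcast’s back-of-the-envelope arithmetic checks out. A quick sketch of the same calculation, using only the figures from the quote above, also shows the implied accelerator power draw:

```python
# Back-of-the-envelope math from the podcast quote above.
target_flops = 1.5e18    # 1.5 exaflops = 1.5 million teraflops
gpu_flops = 7.8e12       # 7.8 teraflops per GPU
gpu_watts = 300          # 300 W per accelerator

n_gpus = target_flops / gpu_flops      # ~192,308 (the quote truncates to 192,307)
power_mw = n_gpus * gpu_watts / 1e6    # megawatts for the accelerators alone

print(f"Accelerators needed: {n_gpus:,.0f}")
print(f"Accelerator power alone: {power_mw:.1f} MW")   # ~57.7 MW
```

Nearly 58 MW for the accelerators alone, before counting CPUs, memory, network, or cooling, is what drives the podcast’s conclusion that “something all-new” is needed.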

HLRS and Wuhan to Collaborate on Exascale Computing

The High-Performance Computing Center Stuttgart (HLRS) and Supercomputing Center of Wuhan University have announced plans to cooperate on technology and training projects. “HLRS and the Supercomputing Center at Wuhan University plan to exchange scientists and to focus on key research topics in high-performance computing. Both sides will also share experience in installing large-scale computing systems, particularly because both Wuhan and Stuttgart aim to develop exascale systems.”

Let’s Talk Exascale: Thom Dunning on Molecular Modeling with NWChemEx

In this edition of Let’s Talk Exascale, Thom Dunning from the University of Washington describes the software effort underway for molecular modeling at exascale with NWChemEx. “To date, our work is focused on the redesign of NWChem, but we’ve also explored a number of alternate strategies for implementing the overall redesign as well as the redesign of the algorithms, and this work required access to the ECP computing allocations.”

Balancing the Load – A Million Cores in Concert

“If you’re doing any kind of parallel simulation, and you have a bit of imbalance, all the other cores have to wait for the slowest one,” Junghans says, a problem that compounds as the computing system’s size grows. “The bigger you go on scale, the more these tiny imbalances matter.” On a system like LANL’s Trinity supercomputer, up to 999,999 cores could idle, waiting on a single one to complete a task.
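A toy model (not from the article, with assumed task times) shows why the effect grows with scale: a timestep finishes only when the slowest core does, so efficiency is the mean per-core time divided by the maximum, and the expected worst straggler gets slower as core counts rise.

```python
import random

# Toy straggler model with assumed numbers: every core does the same
# nominal 1.0 s of work plus a small random delay. A timestep ends when
# the slowest core finishes, so efficiency = mean(time) / max(time).

def parallel_efficiency(n_cores, base=1.0, mean_delay=0.01):
    times = [base + random.expovariate(1.0 / mean_delay) for _ in range(n_cores)]
    return (sum(times) / len(times)) / max(times)

random.seed(0)
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} cores: efficiency ~ {parallel_efficiency(n):.3f}")
```

Even though the average delay stays fixed, the worst-case delay keeps growing with core count, so a jitter that is invisible on 100 cores becomes a measurable efficiency loss at a million, which is exactly the scaling problem Junghans describes.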

HPC Market Update from Hyperion Research

In this video from the HPC User Forum in Tucson, Earl Joseph from Hyperion Research presents an HPC Market Update. “Hyperion Research is the new name for the former IDC high performance computing analyst team. As Hyperion Research, we continue all the worldwide activities that spawned the world’s most respected HPC industry analyst group.”

How Exascale Will Move Earthquake Simulation Forward

In this video from the HPC User Forum in Tucson, David McCallen from LBNL describes how exascale computing capabilities will enhance earthquake simulation for improved structural safety. “With the major advances occurring in high performance computing, the ability to accurately simulate the complex processes associated with major earthquakes is becoming a reality. High performance simulations offer a transformational approach to earthquake hazard and risk assessments that can dramatically increase our understanding of earthquake processes and provide improved estimates of the ground motions that can be expected in future earthquakes.”