Video: The ECP Exascale Computing Project

Paul Messina presented this talk at the HPC User Forum in Austin. “The Exascale Computing Project (ECP) is a collaborative effort of the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA). As part of President Obama’s National Strategic Computing Initiative, ECP was established to develop a new class of high-performance computing systems a thousand times more powerful than today’s petaflop machines.”

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at Exascale will require full data reach — the ability to access and analyze data anywhere in the system. Without this capability, onload architectures force all data to move to the CPU before any analysis can be performed. The ability to analyze data everywhere means that every active component in the cluster contributes to the computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
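
To make the idea concrete, here is a minimal sketch of where in-network computing typically shows up at the application level: a global reduction. With switch-based collective offload (for example, Mellanox’s SHARP technology), a reduction like the one below can be executed in the network fabric itself rather than on the CPUs — the application code is unchanged either way, since the offload happens beneath the MPI library. This is an illustrative example, not code from the article.

```c
/* Minimal sketch: an MPI_Allreduce whose reduction may be executed
 * in-network by offload-capable switches rather than on the CPUs.
 * Build with: mpicc -o allreduce allreduce.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;  /* each rank contributes one value */
    double sum   = 0.0;

    /* With in-network computing, the reduction tree lives in the
     * switches; with onload architectures, every partial result
     * must first be moved to a CPU. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", sum);

    MPI_Finalize();
    return 0;
}
```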

Berkeley Lab to Develop Key Applications for ECP Exascale Computing Project

Today Lawrence Berkeley National Laboratory announced that LBNL scientists will lead or play key roles in developing 11 critical research applications for next-generation supercomputers as part of DOE’s Exascale Computing Project (ECP).

Exascale Computing Project (ECP) Awards $39.8 million for Application Development

“These application development awards are a major first step toward achieving mission critical application readiness on the path to exascale,” said ECP director Paul Messina. “A key element of the ECP’s mission is to deliver breakthrough HPC modeling and simulation solutions that confidently deliver insight and predict answers to the most critical U.S. problems and challenges in scientific discovery, energy assurance, economic competitiveness, and national security,” Messina said. “Application readiness is a strategic aspect of our project and foundational to the development of holistic, capable exascale computing environments.”

Co-Design Offloading

The move to network offloading is the first step toward co-designed systems. Servicing the enormous number of packets generated at modern data rates imposes substantial processing overhead, which can significantly reduce network performance. Offloading network processing to the network interface card relieved this bottleneck, along with several others.
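
One practical payoff of NIC offload is that communication can progress while the CPU does useful work, instead of the CPU burning cycles moving packets. The hedged sketch below illustrates this with standard MPI nonblocking calls; it assumes exactly two ranks, and the buffer size is arbitrary.

```c
/* Minimal sketch of communication/computation overlap, which NIC
 * offload makes effective: the adapter moves the message while the
 * CPU keeps computing. Assumes exactly two ranks. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double sendbuf[N], recvbuf[N];

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;  /* partner rank, assuming a 2-rank job */

    MPI_Request reqs[2];
    MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Useful computation proceeds here while the NIC services the
     * transfer; with onload processing, the CPU would instead be
     * occupied handling packets. */
    double acc = 0.0;
    for (long i = 0; i < N; i++)
        acc += sendbuf[i] * 0.5;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    if (rank == 0)
        printf("overlap done, acc = %f\n", acc);

    MPI_Finalize();
    return 0;
}
```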

Paul Messina Presents: A Path to Capable Exascale Computing

Paul Messina presented this talk at the 2016 Argonne Training Program on Extreme-Scale Computing. “The President’s NSCI initiative calls for the development of Exascale computing capabilities. The U.S. Department of Energy has been charged with carrying out that role in an initiative called the Exascale Computing Project (ECP).” Messina has been tapped to lead the project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project program office is located at Oak Ridge.

Exascale Computing – What are the Goals and the Baseline?

Thomas Schulthess presented this talk at the MVAPICH User Group. “Implementation of exascale computing will be different in that application performance is supposed to play a central role in determining the system performance, rather than just considering floating point performance of the high-performance Linpack benchmark. This immediately raises the question as to what the yardstick will be, by which we measure progress towards exascale computing. I will discuss what type of performance improvements will be needed to reach kilometer-scale global climate and weather simulations. This challenge will probably require more than exascale performance.”
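
As a rough illustration of why kilometer-scale climate simulation can outrun an exaflop, consider the back-of-envelope estimate below. Every number in it is an assumption chosen for the sketch — not a figure from Schulthess’s talk.

```latex
% Back-of-envelope estimate; all numbers are illustrative assumptions.
\begin{align*}
  N_{\text{points}} &\approx 5\times10^{8}\ \text{columns}
      \times 100\ \text{levels} = 5\times10^{10},\\
  \text{steps per simulated year} &\approx
      \frac{3.15\times10^{7}\,\text{s}}{5\,\text{s per step}}
      \approx 6.3\times10^{6},\\
  \text{flops per simulated year} &\approx 5\times10^{10}
      \times 10^{4}\,\tfrac{\text{flops}}{\text{point}\cdot\text{step}}
      \times 6.3\times10^{6} \approx 3\times10^{21},\\
  \text{sustained rate at 1 simulated year/day} &\approx
      \frac{3\times10^{21}}{8.64\times10^{4}\,\text{s}}
      \approx 3.6\times10^{16}\ \text{flop/s}.
\end{align*}
```

Even these optimistic assumptions demand tens of sustained petaflops; since memory-bound stencil codes typically sustain only a few percent of peak, and climate studies run ensembles of many members, the implied peak requirement lands at or beyond the exaflop mark — consistent with Schulthess’s point that application performance, not Linpack, is the yardstick that matters.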

Radio Free HPC Looks at Alternative Processors for High Performance Computing

In this podcast, the Radio Free HPC team looks at why it’s so difficult for new processor architectures to gain traction in HPC and the datacenter. Plus, we introduce a new regular feature for our show: The Catch of the Week.

Video: Exploring I/O Challenges at Exascale

“Clear trends in past and current petascale systems (e.g., Jaguar and Titan) and in the new generation of systems that will transition us toward exascale (e.g., Aurora and Summit) outline how concurrency and peak performance are growing dramatically; I/O bandwidth, however, remains stagnant. In this talk, we explore the challenges of dealing with I/O-ignorant high performance computing systems and the opportunities for integrating I/O awareness into these systems.”
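
One common application-level step toward I/O awareness is collective I/O, in which the MPI-IO layer aggregates many small per-rank writes into large, well-aligned requests that stagnant I/O subsystems handle far better. The sketch below shows the pattern; it is a generic illustration under assumed sizes and file names, not the approach from the talk.

```c
/* Minimal sketch of collective I/O with MPI-IO: each rank writes its
 * slice of a shared file via MPI_File_write_at_all, letting the MPI
 * library coordinate and aggregate the requests. Sizes and the file
 * name "output.dat" are illustrative. */
#include <mpi.h>

#define N 1024  /* doubles per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[N];
    for (int i = 0; i < N; i++)
        buf[i] = (double)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset; the _all variant makes the
     * call collective, so the library can merge small requests into
     * large, contiguous ones before they hit the file system. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```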

The Evolution of HPC

“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day. A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”