Exascale Computing Project Issues Final Assessment on PathForward Program for U.S. Industry

The PathForward element of the Exascale Computing Project was established to prepare US industry for exascale system procurements and, more generally, to improve US competitiveness in the worldwide computing market. The report is available through the US Department of Energy Office of Scientific and Technical Information. Here’s a summary of the report: A competitive PathForward RFP (Request for Proposals) was released […]

Exascale Computing Project Brings Hardware-Accelerated Optimizations to MPICH Library

The MPICH library is one of the most popular implementations of MPI.[i] Primarily developed at Argonne National Laboratory (ANL) with contributions from external collaborators, MPICH has long pursued the idea of delivering a high-performance MPI library by working closely with vendors: the MPICH software provides the link between the MPI interface used by application programmers and the low-level hardware acceleration that vendors provide for their network devices. Yanfei Guo, the principal investigator (PI) of the Exascale MPI project in the Exascale Computing Project (ECP) and assistant computer scientist at ANL, is following this tradition. According to Guo, “The ECP MPICH team is working closely with vendors to add general optimizations—optimizations that will work in all situations—to speed MPICH and leverage the capabilities of accelerators, such as GPUs.”
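To make that division of labor concrete, the sketch below shows the kind of GPU-aware transfer such optimizations target: a rank hands a device-resident buffer straight to MPI_Send, and a GPU-capable MPICH build moves the data without staging it through host memory. This is a minimal illustration, assuming a CUDA device and an MPICH build configured with GPU support; it is not taken from the ECP MPICH sources.

```cpp
// Minimal sketch of a GPU-aware MPI exchange, assuming a CUDA device and
// an MPI library built with GPU support (not code from ECP MPICH itself).
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    double* buf = nullptr;
    cudaMalloc(&buf, n * sizeof(double));  // buffer lives in GPU memory

    if (rank == 0) {
        // A GPU-aware MPICH can move this directly from device memory
        // (e.g., via GPUDirect RDMA) with no staging through the host.
        MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```

Run it with at least two ranks (for example, mpiexec -n 2 ./gpu_exchange); without GPU support in the MPI library, passing a device pointer this way would fail or fall back to slow staging paths.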

Exascale Computing Project Issues Application Development Report

February 11, 2022 — The Exascale Computing Project (ECP) has issued a milestone report that summarizes the status of all 30 ECP Application Development (AD) subprojects at the end of FY20. The 222-page report can be obtained from the ECP website. In October and November of 2020, a comprehensive assessment of AD projects was conducted by […]

Exascale: Rumors Circulate HPC Community Regarding Frontier’s Status

By now you may have expected a triumphant announcement from the U.S. Department of Energy that the Frontier supercomputer, slated to be installed by the end of 2021 as the first U.S. exascale-class system, has been stood up with all systems go. But as of now, DOE (whose Oak Ridge National Laboratory will house Frontier) […]

ECP Brings Visualization Software to Exascale and GPU-accelerated HPC Systems

The development of VTK-m, a scientific visualization toolkit for emerging architectures, is a critical advance in support of scientific visualization on exascale and GPU-accelerated systems for high-performance computing (HPC) users. VTK-m is needed because—counterintuitively—GPUs currently pose software challenges for large-scale scientific visualization tasks. Their massively multithreaded architecture, their separate memory subsystems, and the advent of new visualization workflows, such as in situ and in transit visualization that bypass data movement for big-data simulations, all make visualization software written for earlier architectures problematic on these systems.
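As a rough illustration of how VTK-m addresses this, the sketch below builds a small uniform grid and runs a contour (isosurface) filter; the same filter code executes on whichever device adapter VTK-m was compiled with, whether serial CPU, multicore threading, or CUDA GPUs. Header paths and class names shift across VTK-m releases, so treat these as assumptions based on the 1.x filter API rather than a definitive recipe.

```cpp
// Hedged sketch: isosurface extraction with VTK-m's 1.x-era filter API.
#include <vector>
#include <vtkm/cont/DataSetBuilderUniform.h>
#include <vtkm/filter/Contour.h>

int main() {
    // Build a small uniform grid to stand in for simulation output.
    const vtkm::Id3 dims(64, 64, 64);
    vtkm::cont::DataSet dataSet =
        vtkm::cont::DataSetBuilderUniform::Create(dims);

    // Attach a stand-in scalar field; a real field would come from the
    // simulation, ideally in situ, without writing data to disk first.
    std::vector<vtkm::Float32> density(64 * 64 * 64, 0.0f);
    dataSet.AddPointField("density", density);

    // Extract the 0.5 isosurface of the "density" field.
    vtkm::filter::Contour contour;
    contour.SetActiveField("density");
    contour.SetIsoValue(0.5);

    // The filter dispatches to whatever device adapter was enabled at
    // build time (serial, TBB/OpenMP threads, or CUDA), so the same
    // visualization code follows the data onto GPU-accelerated systems.
    vtkm::cont::DataSet iso = contour.Execute(dataSet);
    (void)iso;  // a real pipeline would render or write out the surface
    return 0;
}
```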

Frontier: OLCF’S Justin Whitt on Deploying the First Exascale Supercomputer

In this installment of the Let’s Talk Exascale podcast series produced by the Department of Energy’s Exascale Computing Project, Justin Whitt, program director of the Oak Ridge Leadership Computing Facility, discusses deployment of Frontier, the first U.S. exascale supercomputer. The system, built by HPE Cray and powered by AMD microprocessors, is scheduled to be installed by […]

Exascale Hardware Evaluation: Workflow Analysis for Supercomputer Procurements

It is well known in the high-performance computing (HPC) community that many (perhaps most) HPC workloads exhibit dynamic performance envelopes that can stress the memory, compute, network, and storage capabilities of modern supercomputers. Optimizing these workloads to run efficiently on existing hardware is challenging, but quantifying their performance envelopes well enough to extrapolate performance predictions to new system architectures is even more challenging, albeit essential. Such predictive analysis helps each data center’s procurement team identify the machines and system architectures that will deliver the most performance for the production workloads at their data center. However, once a supercomputer is installed, configured, made available to users, and benchmarked, it is too late to consider fundamental architectural changes.
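One common building block for this kind of extrapolation is a roofline-style bound: measure a workload’s arithmetic intensity on an existing machine, then cap its attainable rate on a candidate machine at the lesser of peak compute and memory traffic. The sketch below works the arithmetic with made-up numbers; real procurement analyses are far more detailed, but even this simple bound shows why a low-intensity workload benefits more from added memory bandwidth than from added peak FLOP/s.

```cpp
// Hedged sketch: a roofline-style performance bound. All numbers are
// illustrative assumptions, not measurements of any real system.
#include <algorithm>
#include <cstdio>

int main() {
    // Candidate machine (assumed specifications).
    const double peak_tflops = 50.0;  // peak double-precision, TFLOP/s
    const double mem_bw_tbs  = 3.0;   // memory bandwidth, TB/s

    // Workload envelope measured on an existing system.
    const double intensity = 0.8;     // arithmetic intensity, FLOPs/byte

    // Attainable rate is capped by compute or by memory traffic,
    // whichever limit the workload hits first.
    const double bound = std::min(peak_tflops, intensity * mem_bw_tbs);
    std::printf("Roofline bound: %.1f TFLOP/s (%s-bound)\n", bound,
                bound < peak_tflops ? "memory" : "compute");
    return 0;
}
```

With these assumed numbers the workload is memory-bound at 2.4 TFLOP/s, far below the 50 TFLOP/s peak, which is exactly the kind of gap a procurement team wants to see before the machine is bought and installed.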

ExaWind: How Exascale-class HPC Will Help Optimize Skyscraper-sized Wind Turbines of the Future

A wind power revolution is blowing through the U.S. electrical industry, and exascale-class supercomputing is expected to play an increasingly instrumental role in its growth. The Energy Information Administration puts wind power’s share of America’s electricity generation at 8.4 percent in 2020, up from less than 1 percent in 1990. Increasingly competitive on cost and […]

Let’s Talk Exascale: Chandrasekaran on Teaching Supercomputing and Leading ECP’s SOLLVE Project

In this episode of the Exascale Computing Project’s Let’s Talk Exascale, the ECP’s Scott Gibson interviewed Sunita Chandrasekaran, the new principal investigator of the ECP SOLLVE (Scaling OpenMP With LLVm for Exascale Performance and Portability) project. She replaces Barbara Chapman, who, ECP Software Technology Director Mike Heroux said, has been an invaluable […]
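For readers who have not seen it, the code SOLLVE concerns itself with looks like the sketch below: standard OpenMP target directives that offload a loop to an accelerator, which the project’s LLVM compiler work aims to turn into fast, portable device code. This is generic OpenMP 4.5-style offload, not code from the SOLLVE project itself.

```cpp
// Hedged sketch: generic OpenMP target offload of a simple loop.
#include <cstdio>

int main() {
    const int n = 1 << 20;
    double* x = new double[n];
    for (int i = 0; i < n; ++i) x[i] = 1.0;

    // Map the array to the device, run the loop there, copy results back.
    // With no device available, a conforming runtime falls back to the host.
    #pragma omp target teams distribute parallel for map(tofrom: x[0:n])
    for (int i = 0; i < n; ++i)
        x[i] *= 2.0;

    std::printf("x[0] = %.1f\n", x[0]);  // prints 2.0
    delete[] x;
    return 0;
}
```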

Let’s Talk Exascale: How Flux Software Manages Supercomputing Workflows

This episode of the Exascale Computing Project’s Let’s Talk Exascale podcast series delves into Flux, a software framework developed at Lawrence Livermore National Laboratory (LLNL) that is widely used for scheduling and managing modern supercomputing and HPC workflows. Joining the discussion are Lawrence Livermore’s Dong Ahn, Stephen Herbein, Dan Milroy and Tapasya Patki […]