Lawrence Berkeley’s Marques to Run Exascale Computing Project’s Training & Productivity Effort

April 26, 2022 — Osni Marques, a staff scientist in the Applied Math and Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab), has been tapped to lead the Training & Productivity (T&P) effort within the US Department of Energy’s Exascale Computing Project. He takes over for Ashley Barker of Oak Ridge National Laboratory. Marques has been […]

ExaIO: Access and Manage Storage of Data Efficiently and at Scale on Exascale Systems

As the word exascale implies, the forthcoming generation of exascale supercomputer systems will deliver 10^18 flop/s of scalable computing capability. All that computing capability will be for naught if the storage hardware and I/O software stack cannot meet the storage needs of applications running at scale, leaving applications either to drown in data when attempting to write to storage or to starve while waiting to read data from storage. Suren Byna, PI of the ExaIO project in the Exascale Computing Project (ECP) and staff computer scientist at Lawrence Berkeley National Laboratory, highlights the need to prepare for the I/O demands of exascale supercomputers, noting that storage is typically the last subsystem available for testing on these systems.

Exascale Computing Project Releases New Version of Extreme-Scale HPC Scientific Software Stack

The Extreme-scale Scientific Software Stack (E4S) high-performance computing (HPC) software ecosystem—a broad collection of software capabilities continually developed to address emerging scientific needs of the US Department of Energy community—recently released version 22.02. E4S, which began in the fall of 2018, is aimed at accelerating the development, deployment, and use of HPC software, thereby […]

Exascale: Preparing PETSc/TAO Software for Scientific Applications

In this episode of the Let’s Talk Exascale podcast, produced by DOE’s Exascale Computing Project, the topic is PETSc—the Portable, Extensible Toolkit for Scientific Computation. It’s a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. A team within ECP is preparing PETSc/TAO for exascale […]
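To make concrete what "the scalable solution of applications modeled by partial differential equations" means, here is a deliberately tiny, pure-Python sketch—not PETSc code, whose real API is C/Fortran (with Python bindings via petsc4py)—of the kind of sparse linear solve a PDE discretization produces: a 1D Poisson problem -u'' = f with zero boundary values, discretized by finite differences and solved iteratively. PETSc provides production-grade, parallel versions of exactly this pattern (sparse matrices, Krylov solvers, preconditioners) at exascale.

```python
def solve_poisson_1d(f, n, iters=20000):
    """Toy sketch: solve -u'' = f on (0, 1) with u(0) = u(1) = 0,
    discretized on n interior grid points, via Jacobi iteration.
    PETSc would assemble the same tridiagonal system as a distributed
    sparse matrix and solve it with far more capable methods."""
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for _ in range(iters):
        u_new = [0.0] * n
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            # Jacobi update for the finite-difference stencil
            # (-u[i-1] + 2*u[i] - u[i+1]) / h^2 = f[i]
            u_new[i] = 0.5 * (left + right + h * h * f[i])
        u = u_new
    return u

# For the constant load f = 2, the exact solution is u(x) = x(1 - x),
# so the grid point at x = 0.5 should converge to 0.25.
n = 31
u = solve_poisson_1d([2.0] * n, n)
print(abs(u[n // 2] - 0.25) < 1e-6)
```

Jacobi iteration is used here only because it fits in a few lines; in PETSc one would select a Krylov method and preconditioner at runtime, and the same code would run unchanged on one core or on an exascale machine.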

Exascale Computing Project Issues Final Assessment on PathForward Program for U.S. Industry

The PathForward element of the Exascale Computing Project was established to prepare US industry for exascale system procurements and, more generally, to improve US competitiveness in the worldwide computing market. The final assessment report is available through the US Department of Energy Office of Scientific and Technical Information. Here’s a summary of the report: A competitive PathForward RFP (Request for Proposals) was released […]

Exascale Computing Project Brings Hardware-Accelerated Optimizations to MPICH Library

The MPICH library is one of the most popular implementations of MPI. Primarily developed at Argonne National Laboratory (ANL) with contributions from external collaborators, MPICH has adhered to the idea of delivering a high-performance MPI library by working closely with vendors: the MPICH software provides the link between the MPI interface used by application programmers and the low-level hardware acceleration that vendors provide for their network devices. Yanfei Guo (Figure 1), the principal investigator (PI) of the Exascale MPI project in the Exascale Computing Project (ECP) and assistant computer scientist at ANL, is continuing this tradition. According to Guo, “The ECP MPICH team is working closely with vendors to add general optimizations—optimizations that will work in all situations—to speed MPICH and leverage the capabilities of accelerators, such as GPUs.”

Exascale Computing Project Issues Application Development Report

February 11, 2022 — The Exascale Computing Project (ECP) has issued a milestone report that summarizes the status of all 30 ECP Application Development (AD) subprojects at the end of FY20. The 222-page report can be obtained from the ECP website. In October and November of 2020, a comprehensive assessment of AD projects was conducted by […]

Exascale: Rumors Circulate in HPC Community Regarding Frontier’s Status

By now you may have expected a triumphant announcement from the U.S. Department of Energy that the Frontier supercomputer, slated to be installed by the end of 2021 as the first U.S. exascale-class system, has been stood up with all systems go. But as of now, DOE (whose Oak Ridge National Laboratory will house Frontier) […]

ECP Brings Visualization Software to Exascale and GPU-accelerated HPC Systems

The development of VTK-m, a scientific visualization toolkit for emerging architectures, is a critical advancement in support of scientific visualization on exascale and GPU-accelerated systems for high-performance computing (HPC) users. VTK-m is needed because—counterintuitively—GPUs currently present software challenges for large-scale scientific visualization tasks. The massively multithreaded architecture of GPUs, their separate memory subsystems, and the advent of new visualization workflows, such as in situ and in transit visualization, which bypass data movement for big-data simulations, are currently problematic for scientific visualization software designed for earlier architectures.

Frontier: OLCF’s Justin Whitt on Deploying the First Exascale Supercomputer

In this installment of the Let’s Talk Exascale podcast series produced by the Department of Energy’s Exascale Computing Project, Justin Whitt, program director of the Oak Ridge Leadership Computing Facility, discusses deployment of Frontier, the first U.S. exascale supercomputer. The system, built by HPE-Cray and powered by AMD microprocessors, is scheduled to be installed by […]