Exascale Computing Project Brings Hardware-Accelerated Optimizations to MPICH Library

The MPICH library is one of the most popular implementations of MPI.[i] Primarily developed at Argonne National Laboratory (ANL) with contributions from external collaborators, MPICH has long pursued high performance by working closely with vendors: the MPICH software provides the link between the MPI interface used by application programmers and the low-level hardware acceleration that vendors provide for their network devices. Yanfei Guo (Figure 1), the principal investigator (PI) of the Exascale MPI project in the Exascale Computing Project (ECP) and assistant computer scientist at ANL, is continuing this tradition. According to Guo, “The ECP MPICH team is working closely with vendors to add general optimizations—optimizations that will work in all situations—to speed MPICH and leverage the capabilities of accelerators, such as GPUs.”

Podcast: Evolving MPI for Exascale Applications

In this episode of Let’s Talk Exascale, Pavan Balaji and Ken Raffenetti describe their efforts to help MPI, the de facto programming model for parallel computing, run as efficiently as possible on exascale systems. “We need to look at a lot of key technical challenges, like performance and scalability, when we go up to this scale of machines. Performance is one of the biggest things that people look at. Aspects with respect to heterogeneity become important.”

Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS-OpenSHMEM/UPC/CAF/UPC++, OpenMP and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA) and energy awareness.”

Bill Gropp Named Acting Director of NCSA

“I am honored to have been asked to drive NCSA’s continuing mission as a world-class, integrative center for transdisciplinary convergent research, education, and innovation,” said Gropp. “Embracing advanced computing and domain collaborations across the University of Illinois at Urbana-Champaign campus and ensuring scientific communities have access to advanced digital resources will be at the heart of these efforts.”

Marc Snir on Why Argonne is Part of the OpenHPC Community

Dr. Marc Snir discusses why Argonne is participating in the OpenHPC Community. “OpenHPC can be a good mechanism to make sure all the pieces of open source software in HPC fit well together. It’s an important initiative that can bring together the HPC open source software community. It can make sure that a full stack of HPC software is available in a useful manner to the user community.”