HPC News Bytes 20230828: Gartner on Composable; An ABI for MPI; AMD Acquires Mipsology; Google Cloud HPC Clone

A happy Monday morn to you. This week’s HPC News Bytes offers a quick (4:53) run-through of the major news in our sector over the past week. This morning we look at: Gartner's prediction of accelerated growth for composable computing; a proposed ABI (application binary interface) for MPI to simplify parallel apps; AMD's purchase of AI software company Mipsology (a sign of more AI M&A activity to come?); and an HPC clone on Google Cloud Platform.

Lenovo HPC Powers SPEChpc™ 2021 with AMD 3rd Generation EPYC™ Processors

As a leader in high performance computing, Lenovo continually supports the Standard Performance Evaluation Corporation (SPEC) benchmarks, which help customers make better-informed decisions about their HPC workloads. SPEChpc™ 2021 is a newly released benchmark suite from SPEC that provides industry-standard benchmarks for the newest generation of computer systems. What separates SPEChpc™ 2021 from SPEC CPU® 2017, SPEC MPI® 2007, and the other SPEC benchmark suites is that SPEChpc™ 2021 is a one-of-a-kind suite that uses real-world applications supporting “multiple programming models and offloading” to evaluate the performance of state-of-the-art heterogeneous HPC systems.
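For readers unfamiliar with that phrase, a minimal hybrid sketch may help. The toy C code below is not from the suite; it assumes only an MPI library and a compiler with OpenMP target offload support, and shows the MPI + OpenMP offload pattern that SPEChpc-style applications combine: each rank reduces its share of the work on an accelerator, then the partial results are combined across ranks.

    /* Minimal hybrid MPI + OpenMP offload sketch (illustrative only).
     * Build with something like: mpicc -fopenmp hybrid.c -o hybrid
     * (offload flags vary by compiler and target device). */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = 0.0;
        /* Offload this rank's share of the reduction to a device if one
         * is available; OpenMP falls back to the host otherwise. */
        #pragma omp target teams distribute parallel for \
                reduction(+:local) map(tofrom:local)
        for (int i = rank; i < N; i += size)
            local += 1.0 / (double)N;   /* toy workload */

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }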

Exascale Computing Project Brings Hardware-Accelerated Optimizations to MPICH Library

The MPICH library is one of the most popular implementations of MPI. Primarily developed at Argonne National Laboratory (ANL) with contributions from external collaborators, MPICH has long pursued high performance by working closely with vendors: the MPICH software provides the link between the MPI interface used by application programmers and the low-level hardware acceleration vendors supply for their network devices. Yanfei Guo, the principal investigator (PI) of the Exascale MPI project in the Exascale Computing Project (ECP) and assistant computer scientist at ANL, is following this tradition. According to Guo, “The ECP MPICH team is working closely with vendors to add general optimizations—optimizations that will work in all situations—to speed MPICH and leverage the capabilities of accelerators, such as GPUs.”
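In practice, one user-visible result of such optimizations is GPU-aware MPI, where a device pointer can be handed directly to MPI calls and the library moves the data without a host staging copy. The sketch below is illustrative, not ECP code; it assumes a CUDA-aware MPICH build, two ranks, and a GPU per rank.

    /* Sketch of GPU-aware MPI (illustrative): with a CUDA-aware build,
     * device pointers go straight into MPI_Send/MPI_Recv. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        double *buf;
        cudaMalloc((void **)&buf, n * sizeof(double));  /* device memory */
        cudaMemset(buf, 0, n * sizeof(double));

        if (rank == 0)      /* device pointer passed directly to MPI */
            MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(buf);
        MPI_Finalize();
        return 0;
    }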

Videos, Slides from MUG ’20 Now Available

The MVAPICH User Group Meeting (MUG ’20), built around the MVAPICH implementation of the MPI standard developed at Ohio State University, has posted videos and slides from presentations on a variety of topics at its recent annual conference. The program included keynote talks from Brian Van Essen of Lawrence Livermore National Laboratory and Michael Norman of the San Diego Supercomputer Center […]

GPCNeT or GPCNoT?

In this special guest feature, Gilad Shainer from Mellanox Technologies writes that the new GPCNeT benchmark is actually a measure of relative performance under load rather than a measure of absolute performance. A network whose latency degrades from 1.0 to 1.5 microseconds under congestion, for example, scores better than one that degrades from 0.5 to 1.0 microseconds, even though the latter is faster in absolute terms. “When it comes to evaluating high-performance computing systems or interconnects, there are much better benchmarks available for use. Moreover, the ability to benchmark real workloads is obviously a better approach for determining system or interconnect performance and capabilities. The drawbacks of GPCNeT benchmarks can be much more than its benefits.”

Distributed HPC Applications with Unprivileged Containers

Felix Abecassis and Jonathan Calmels from NVIDIA gave this talk at FOSDEM 2020. “We will present the challenges in doing distributed deep learning training at scale on shared heterogeneous infrastructure. At NVIDIA, we use containers extensively in our GPU clusters for both HPC and deep learning applications. We love containers for how they simplify software packaging and enable reproducibility without sacrificing performance.”

Geoffrey C. Fox to receive Ken Kennedy Award at SC19

Today ACM/IEEE named Geoffrey C. Fox of Indiana University Bloomington as the recipient of the 2019 ACM-IEEE CS Ken Kennedy Award. “Fox was cited for foundational contributions to parallel computing methodology, algorithms and software, and data analysis, and their interfaces with broad classes of applications. The award will be presented at SC19 in Denver.”

Checkpointing the Un-checkpointable: MANA and the Split-Process Approach

Gene Cooperman from Northeastern University gave this talk at the MVAPICH User Group. “This talk presents an efficient, new software architecture: split processes. The ‘MANA for MPI’ software demonstrates this split-process architecture. The MPI application code resides in ‘upper-half memory’, and the MPI/network libraries reside in ‘lower-half memory’.”
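MANA's actual mechanism is considerably more involved (the lower half gets its own heap, stack, and memory region), but the indirection at its core can be suggested with a toy sketch. Everything below is a hypothetical illustration, assuming an installed MPI library is visible as libmpi.so: the upper half never links MPI directly and reaches it only through a small function table that can be re-filled after a restart, so a checkpoint need only save the upper half.

    /* Toy illustration of the split-process idea (NOT MANA's actual
     * mechanism). Compile with: cc split.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef struct {                      /* lower-half entry points */
        int (*init)(int *, char ***);
        int (*finalize)(void);
    } LowerHalf;

    /* (Re)attach the lower half by resolving MPI symbols at run time. */
    static int lower_half_boot(LowerHalf *lh) {
        void *h = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return -1; }
        lh->init     = (int (*)(int *, char ***))dlsym(h, "MPI_Init");
        lh->finalize = (int (*)(void))dlsym(h, "MPI_Finalize");
        return (lh->init && lh->finalize) ? 0 : -1;
    }

    int main(int argc, char **argv) {
        LowerHalf lh;
        if (lower_half_boot(&lh) != 0) return 1;
        lh.init(&argc, &argv);
        /* ... upper-half application work; a checkpoint here would save
         * only this half, discarding the MPI library state below ... */
        lh.finalize();
        return 0;
    }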

Video: Three Perspectives on Message Passing

Robert Harrison from Brookhaven gave this talk at the MVAPICH User Group. “MADNESS, TESSE/EPEXA, and MolSSI are three quite different large and long-lived projects that provide different perspectives and driving needs for the future of message passing. All three of these projects employ MPI and have a vested interest in computation at all scales, spanning the classroom to future exascale systems.”

Benchmarking MPI Applications in Singularity Containers on Traditional HPC and Cloud Infrastructures

Andrei Plamada from ETH Zurich gave this talk at the hpc-ch forum on Cloud and Containers. “Singularity is a container solution that promises to both integrate MPI applications seamlessly and run containers without privilege escalation. These benefits make Singularity a potentially good candidate for the scientific high-performance computing community. However, the performance overhead introduced by Singularity is unclear. In this work we will analyze the overhead and the user experience on both traditional HPC and cloud infrastructures.”
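Overhead studies of this kind typically rely on microbenchmarks run both natively and inside the container. The ping-pong latency test below is a generic sketch, not Plamada's code; the iteration count is illustrative and two ranks are assumed.

    /* Minimal MPI ping-pong latency test (illustrative): run it natively
     * and inside the container, then compare the reported latencies. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 10000;
        char byte = 0;
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("half round-trip latency: %.2f us\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }

Under the hybrid model one would launch it with the host MPI, e.g. mpirun -np 2 singularity exec image.sif ./pingpong (image name hypothetical), and compare against a native mpirun of the same binary.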