

Podcast: Evolving MPI for Exascale Applications

In this episode of Let’s Talk Exascale, Pavan Balaji and Ken Raffenetti describe their efforts to help MPI, the de facto programming model for parallel computing, run as efficiently as possible on exascale systems. “We need to look at a lot of key technical challenges, like performance and scalability, when we go up to this scale of machines. Performance is one of the biggest things that people look at. Aspects with respect to heterogeneity become important.”

Amazon and Libfabric: A case study in flexible HPC Infrastructure

Brian Barrett from Amazon gave this talk at the 2018 OpenFabrics Workshop. “As network performance becomes a larger bottleneck in application performance, AWS is investing in improving HPC network performance. Our initial investment focused on improving performance in open source MPI implementations, with positive results. Recently, however, we have pivoted to focusing on using libfabric to improve point-to-point performance.”
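The point-to-point path described here is the one a simple ping-pong microbenchmark exercises. Below is a minimal, illustrative MPI ping-pong sketch of the kind of small-message latency test such work targets; it is not AWS or libfabric code, just standard MPI. Compile with an MPI wrapper such as mpicc and run with two ranks.

/* Minimal two-rank ping-pong sketch exercising the point-to-point
 * path that libfabric providers sit beneath. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char buf[8];                      /* small message: latency-bound */
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("avg round-trip latency: %g us\n",
               (MPI_Wtime() - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}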

Podcast: Open MPI for Exascale

In this Let’s Talk Exascale podcast, David Bernholdt from ORNL discusses the Open MPI for Exascale project, which focuses on the communication infrastructure of MPI, the Message Passing Interface, a very widely used standard for interprocess communication in parallel computing. “Because applications may make millions or billions of short calls to the MPI library during the course of an execution, even small performance improvements can have a significant overall impact on the application runtime.”
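To make those "millions or billions of short calls" concrete, here is a hedged sketch of a one-dimensional halo exchange issued once per timestep; over a long run this small block of MPI calls executes a huge number of times, so per-call library overhead adds up. The decomposition, array layout and names (NX, ghost cells at u[0] and u[NX+1]) are illustrative, not taken from the project.

/* Sketch: 1-D halo exchange repeated every timestep. */
#include <mpi.h>
#include <stdio.h>

#define NX 64   /* interior cells per rank (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* neighbors in a non-periodic 1-D decomposition */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    double u[NX + 2];                 /* interior plus two ghost cells */
    for (int i = 0; i <= NX + 1; i++)
        u[i] = rank;

    for (int step = 0; step < 1000; step++) {   /* many short MPI calls */
        MPI_Request reqs[4];

        /* receive ghost cells from both neighbors */
        MPI_Irecv(&u[0],      1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&u[NX + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);

        /* send boundary cells to both neighbors */
        MPI_Isend(&u[NX], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(&u[1],  1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[3]);

        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        /* ... stencil update on u[1..NX] would go here ... */
    }

    if (rank == 0)
        printf("completed 1000 halo exchanges on %d ranks\n", size);

    MPI_Finalize();
    return 0;
}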

Sylabs Startup forms Commercial Entity behind Singularity for HPC

Today an HPC startup called Sylabs entered the market to provide solutions and services based on Singularity, an open source container technology designed for high performance computing. Founded by the inventor and project lead of Singularity, Sylabs will license and support Singularity Pro, an enterprise version of the software, and introduce it to the enterprise and commercial HPC markets.

Video: How MVAPICH & MPI Power Scientific Research

Adam Moody from LLNL presented this talk at the MVAPICH User Group. “High-performance computing is being applied to solve the world’s most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand.”

Overview of the MVAPICH Project and Future Roadmap

“This talk will provide an overview of the MVAPICH project (past, present and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-Virt, MVAPICH2-EA and MVAPICH2-MIC) will be presented. Current status and future plans for OSU INAM, OEMT and OMB will also be presented.”

Challenges and Opportunities for HPC Interconnects and MPI

“This talk will reflect on prior analysis of the challenges facing high-performance interconnect technologies intended to support extreme-scale scientific computing systems, how some of these challenges have been addressed, and what new challenges lie ahead. Many of these challenges can be attributed to the complexity created by hardware diversity, which has a direct impact on interconnect technology, but new challenges are also arising indirectly as reactions to other aspects of high-performance computing, such as alternative parallel programming models and more complex system usage models.”

Internode Programming With MPI and Intel Xeon Phi Processor

“While MPI was originally developed for general-purpose CPUs and is widely used in the HPC space in that capacity, MPI applications can also be developed for and deployed on the Intel Xeon Phi processor. With an understanding of the algorithms used in a specific application, tremendous performance can be achieved by combining OpenMP and MPI.”
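As a rough illustration of that hybrid approach, the following sketch (not Intel's or the article's code) combines OpenMP threading inside each MPI rank with an MPI reduction across ranks. It assumes MPI_THREAD_FUNNELED is sufficient because MPI is called only outside the parallel region; compile with something like mpicc -fopenmp.

/* Hedged sketch of the hybrid MPI + OpenMP pattern: OpenMP threads do
 * the node-local work, MPI combines results across ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* MPI is called only by the main thread, outside parallel regions */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 1000000;           /* elements per rank (illustrative) */
    double local = 0.0;

    /* OpenMP threads share this rank's portion of the loop */
    #pragma omp parallel for reduction(+:local)
    for (long i = 0; i < n; i++)
        local += (double)(rank * n + i);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum across %d ranks x %d threads = %g\n",
               size, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}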

OSC Hosts Fifth MVAPICH Users Group

A broad array of system administrators, developers, researchers and students who share an interest in the MVAPICH open-source library for high performance computing will gather this week for the fifth meeting of the MVAPICH Users Group (MUG). “Dr. Panda’s library is a cornerstone for HPC machines around the world, including OSC’s systems and many of the Top 500,” said Dave Hudak, Ph.D., interim executive director of OSC. “We’ve gained a lot of insight and expertise from partnering with DK and his research group throughout the years.”

Test Your Knowledge with the MPI Quiz

In this video, David Henty from EPCC leads a quiz on MPI. “The multiple-choice questions are partly designed for fun to test attendees’ knowledge, but are mainly aimed at promoting discussion about MPI and its usage in real applications. All that is assumed is a working knowledge of basic MPI functionality: send, receive, collectives, derived datatypes and non-blocking communications.”
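For readers who want a quick refresher on the prerequisites Henty lists, here is a small illustrative sketch (not taken from the quiz) that combines two of them: a derived datatype built with MPI_Type_vector describes one column of a row-major matrix, and a collective MPI_Bcast then distributes that column from rank 0 to all ranks.

/* Refresher sketch: derived datatype + collective. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

#define ROWS 4
#define COLS 5

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double m[ROWS][COLS] = {{0}};
    if (rank == 0)                       /* fill column 2 on the root */
        for (int i = 0; i < ROWS; i++)
            m[i][2] = i + 1.0;

    /* ROWS blocks of 1 double with stride COLS: one matrix column */
    MPI_Datatype column;
    MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    /* collective: every rank ends up with the root's column 2 */
    MPI_Bcast(&m[0][2], 1, column, 0, MPI_COMM_WORLD);

    printf("rank %d: column 2 = %g %g %g %g\n",
           rank, m[0][2], m[1][2], m[2][2], m[3][2]);

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}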