Come to Portland! MPI 3.1 is Just Around the Corner

Jeff Squyres

Over at Cisco’s High Performance Computing Networking Blog, Jeff Squyres writes that MPI 3.1 is coming soon.

Using Advanced MPI: Modern Features of the Message-Passing Interface

We need a reviewer for this book!

“These authors are experts in MPI, but more importantly, they are experts at teaching MPI. If you want to master MPI, there are no better guides than this book and its companion.”

Bill Gropp Presents: MPI for Scalable Computing

Bill Gropp

In this video from the 2014 Argonne Training Program on Extreme-Scale Computing, Bill Gropp from NCSA presents: Cost of Unintended Synchronization. “At ATPESC 2014, we captured 67 hours of lectures in 86 videos of presentations by pioneers and elites in the HPC community on topics ranging from programming techniques and numerical algorithms best suited for leading-edge HPC systems to trends in HPC architectures and software most likely to provide performance portability through the next decade and beyond.”

HPC Thought Leaders Publish New Book on Using Advanced MPI


A new MPI book is available for pre-order on Amazon. Written by William Gropp, Torsten Hoefler, Ewing Lusk, and Rajeev Thakur, Using Advanced MPI: Modern Features of the Message-Passing Interface offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2.

MVAPICH: Still Saving the World – Now Even Faster

Adam Moody, LLNL

“MPI is in the national interest. The U.S. government tasks Lawrence Livermore National Laboratory with solving the nation’s and the world’s most difficult problems. These include global security, disaster response and planning, drug discovery, energy production, and climate change, to name a few. To meet this challenge, LLNL scientists utilize large-scale computer simulations on Linux clusters with InfiniBand networks. As such, MVAPICH serves a critical role in this effort. In this talk, I will highlight some of the recent work that MVAPICH has enabled.”

Stodgy MPI Shows its Age

Andreas Schäfer

Over at the GentryX Blog, Andreas Schäfer writes that while MPI is the most widely used programming model in HPC, its inflexibility is betraying its age, and contenders are flexing their muscles. “MPI is not becoming easier to use, but harder.”

Slidecast: MPI Requirements of the Network Layer


Jeff Squyres from Cisco describes the proposed successor to the Linux verbs API that is designed to better serve the needs of MPI. “It’s not libibverbs 2.0 — it’s a new API that aims to both expand the scope of what libibverbs did, and also to address many of its much-criticized shortcomings.”

A Gentle Introduction to MPI


PRACE has developed an extensive series of online HPC Tutorials. In this video, CSC provides a basic introduction to parallel programming concepts such as task/data parallelism, parallel scaling, and Amdahl’s law.
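Amdahl’s law, covered in the tutorial, bounds the speedup of a program by its serial fraction: even a small portion of unparallelizable work caps the benefit of adding processors. A minimal sketch of the formula (the function name and parameters below are illustrative, not taken from the CSC material):

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Speedup predicted by Amdahl's law.

    parallel_fraction: portion of runtime that can be parallelized (0..1).
    processors: number of processors applied to the parallel portion.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# With 90% of the work parallelized, 10 processors yield only ~5.26x,
# and no processor count can ever push the speedup past 1 / 0.1 = 10x.
print(round(amdahl_speedup(0.9, 10), 2))  # prints 5.26
```

This is why parallel scaling discussions focus on shrinking the serial fraction rather than simply adding cores.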

Torsten Hoefler on How the SC13 Best Paper Came Together


“In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing highest performance and minimal overheads.”

A New MPICH ABI Compatibility Initiative

As announced at SC13, the producers of several notable MPICH-derived MPI implementations have begun a collaboration with the explicit goal of maintaining ABI compatibility between their implementations. “Without such compatibility between implementations,” said Kenneth Raffenetti, a software developer in Argonne’s Mathematics and Computer Science Division, “every new release would require application developers to rebuild and […]