What is MPI?

The Message Passing Interface (MPI) is a standard for programming distributed-memory parallel computers. Open-source implementations such as Open MPI and MPICH2 are available for many platforms, so software developed with MPI tends to be portable.
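
To make this concrete, here is a minimal sketch of an MPI program in C: every process reports its rank (its identity within the job) and the total number of processes. It would be compiled with an MPI wrapper compiler such as mpicc and launched with mpirun or mpiexec.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut the runtime down      */
    return 0;
}
```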

MPI offers both one-sided and two-sided message semantics, in addition to collective communication routines. MPI also provides private communication contexts (via communicators) to prevent interference between two software packages running at the same time, much as the Virtual Interface Architecture does.
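
The following C sketch (assuming a job with at least two ranks) exercises each of these facilities: a duplicated communicator serving as a private context, a matched two-sided send/receive pair, and a collective reduction.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0, sum = 0;
    MPI_Comm private_comm;                   /* a library's own context */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Private context: messages on private_comm cannot collide with
     * messages sent on MPI_COMM_WORLD, even if the tags match. */
    MPI_Comm_dup(MPI_COMM_WORLD, &private_comm);

    /* Two-sided semantics: a matched send/receive pair. */
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, private_comm);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, private_comm, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    /* Collective communication: every rank contributes its rank number,
     * and every rank receives the reduced sum. */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    MPI_Comm_free(&private_comm);
    MPI_Finalize();
    return 0;
}
```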

MPI has a number of drawbacks, though. First, it is a committee-defined standard, much like Ada and InfiniBand, and is consequently large and monolithic. Among its more than two hundred functions are primitives for I/O and process management, features of dubious value in a communications library. Furthermore, the one-sided message semantics are not very flexible and require explicit memory registration (pinning) by the end user. Finally, MPI does not support message-driven execution in the style of RPC or CORBA, which means the end user must check for message progress via polling or blocking rather than callbacks, as the sketch below illustrates.
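
In this sketch (assuming two ranks, with do_some_work() as a hypothetical stand-in for useful computation), rank 1 posts a non-blocking receive and must then poll it with MPI_Test; there is no way to ask the library to invoke a callback when the message arrives.

```c
#include <mpi.h>
#include <stdio.h>

static void do_some_work(void) { /* placeholder for real computation */ }

int main(int argc, char **argv)
{
    int rank, payload = 0, done = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        payload = 7;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

        /* Polling loop: the application, not the library, must keep
         * checking whether the message has arrived. */
        while (!done) {
            do_some_work();
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }
        printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```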

Compounding these troubles, MPI is heavily dependent on the quality of the underlying implementation. As with pthreads, the user may be forced to design around flaws in the software, especially if the implementation lacks thread safety or independent message progress. So much for portability.

That said, MPI is currently the most widely available solution for parallel programming, so it is wise to at least be familiar with it.