OSU Releases MVAPICH2 1.4


What better than some new software to occupy your Friday morning?  Dhabaleswar Panda and his crack team of MPI nuts have announced the latest release of MVAPICH, their MPICH-based MPI distribution for InfiniBand.  Anyone who does MPI work on InfiniBand machines has probably toyed with MVAPICH at some point.  Version 1.4 implements the MPI 2.1 standard and is built on an MPICH2 1.0.8p1 core.
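
Since the standard level is what application code cares about, a quick sanity check after installing any new build is to ask the library what it reports. The snippet below uses only standard MPI calls and assumes nothing beyond an MPI compiler wrapper such as mpicc on your PATH.

    /* mpi_version.c -- print the MPI standard version the library reports. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int major, minor;

        MPI_Init(&argc, &argv);
        MPI_Get_version(&major, &minor);   /* an MPI 2.1 build reports 2 and 1 */
        printf("This library implements MPI %d.%d\n", major, minor);
        MPI_Finalize();
        return 0;
    }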

The MVAPICH2 1.4 series provides many features, including:

  • Scalable and robust daemon-less job startup
  • Full autoconf-based configuration
  • Enhanced processor affinity with PLPA
  • Message coalescing
  • Dynamic process migration
  • Process-level fault tolerance with checkpoint-restart
  • Network-level fault tolerance with Automatic Path Migration (APM)
  • RDMA CM support
  • iWARP support
  • Optimized collectives
  • On-demand connection management
  • Multi-pathing
  • RDMA Read-based and RDMA Write-based designs
  • Polling- and blocking-based communication progress
  • Multi-core optimized and scalable shared memory support
  • LiMIC2-based kernel-level shared memory support
  • Memory hooks with ptmalloc2 library support

The ADI-3-level design of the MVAPICH2 1.4 series supports MPI-2 functionalities (one-sided communication, dynamic process management, collectives and datatypes), multi-threading, and all MPI-1 functionality. It also supports a wide range of platforms (architectures, operating systems, compilers, InfiniBand adapters, iWARP adapters, RDMAoE adapters, and network adapters supporting the uDAPL interface).
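
For a flavor of the MPI-2 one-sided support mentioned above, here is a minimal RMA sketch. It uses only standard MPI-2 calls (MPI_Win_create, MPI_Put, MPI_Win_fence), so it is not MVAPICH2-specific; any MPI-2 implementation should run it, with the RMA traffic carried over whichever transport the library was built for.

    /* one_sided.c -- minimal MPI-2 one-sided (RMA) example.
     * Standard MPI only; build with the mpicc wrapper and run with 2+ ranks. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        int local = 0;              /* window memory exposed to other ranks */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Every rank exposes one int to the others. */
        MPI_Win_create(&local, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0 && nprocs > 1) {
            int value = 42;
            /* Write 42 into rank 1's window; rank 1 posts no matching receive. */
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);

        if (rank == 1)
            printf("rank 1 saw %d via MPI_Put\n", local);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Compile it with mpicc and launch it with whatever startup mechanism your build provides (the daemon-less mpirun_rsh launcher that ships with MVAPICH2, or a generic mpiexec).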

The current 1.4 release supports the following underlying transport interfaces:

  • OpenFabrics-IB: This interface supports all InfiniBand-compliant devices based on the OpenFabrics Gen2 layer. It has the most features and is the most widely used; for example, it can be used with all Mellanox InfiniBand adapters, IBM eHCA adapters and QLogic adapters.
  • OpenFabrics-iWARP: This interface supports all iWARP-compliant devices supported by OpenFabrics. For example, this layer supports Chelsio T3 adapters in native iWARP mode.
  • OpenFabrics-RDMAoE: This interface supports the emerging RDMAoE (RDMA over Ethernet) interface for Mellanox ConnectX-EN adapters with 10GigE switches.
  • QLogic InfiniPath: This interface provides native support for InfiniPath adapters from QLogic over the PSM interface. It provides high-performance point-to-point communication for both one-sided and two-sided operations.
  • uDAPL: This interface supports all network adapters and software stacks that implement the portable DAPL interface from the DAT Collaborative. For example, it can be used with all Mellanox adapters, Chelsio adapters and NetEffect adapters, as well as with the Solaris uDAPL-IBTL implementation over InfiniBand adapters.
  • TCP/IP: The standard TCP/IP interface (provided by MPICH2) works with a range of network adapters that support TCP/IP, including IPoIB (TCP/IP over the InfiniBand network). However, it will not deliver the performance or scalability of the other interfaces; a quick ping-pong sketch for comparing builds follows this list.
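
The practical differences between these interfaces show up mostly as latency and bandwidth, so a crude way to compare builds is a ping-pong timing loop like the sketch below. It is ordinary two-sided MPI code with nothing interface-specific; the message size and iteration count are arbitrary choices for illustration.

    /* pingpong.c -- rough ping-pong latency check between ranks 0 and 1.
     * Plain MPI; run it against builds for different transports to compare. */
    #include <mpi.h>
    #include <stdio.h>

    #define MSG_BYTES 8
    #define ITERS     1000

    int main(int argc, char **argv)
    {
        char buf[MSG_BYTES];
        int rank, i;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("average one-way latency: %.2f microseconds\n",
                   (t1 - t0) * 1e6 / (2.0 * ITERS));

        MPI_Finalize();
        return 0;
    }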

The latest release is available at a public MVAPICH Subversion repository near you.  For more info, check out the full list of features here and read the official release announcement here.