Open MPI 2.0.0 Released


Today the Open MPI Team announced the release of Open MPI version 2.0.0, a major new release series containing many new features and bug fixes.

Increasing the major release number to “2” is indicative of the magnitude of the changes in this release: v2.0.0 is effectively a new generation of Open MPI compared to the v1.10 series (see https://www.open-mpi.org/software/ompi/versions/ for a description of Open MPI’s versioning scheme). Many of the changes are visible to users, but equally importantly, there are many changes “under the hood” that add stability and performance improvements to the inner workings of Open MPI.

This release also retires support for some legacy systems, and is not ABI compatible with the v1.10 series. Users will need to recompile their MPI applications to use Open MPI v2.0.0.
As with any new major release series, production users are encouraged to test thoroughly when upgrading from a prior version of Open MPI, even though the Open MPI community has tested the v2.0.0 release extensively.

Here is a list of the major new features in Open MPI v2.0.0:

  • Open MPI is now MPI-3.1 compliant.
  • Many enhancements to MPI RMA. Open MPI now maps MPI RMA operations
    onto native RMA operations on networks that support this capability
    (a short example appears after this list).
  • Greatly improved support for MPI_THREAD_MULTIPLE (when configured
    with --enable-mpi-thread-multiple); see the sketch after this list.
  • Enhancements to reduce the memory footprint for jobs at scale. A
    new MCA parameter, “mpi_add_procs_cutoff”, is available to set the
    threshold for using this feature (see the usage note after this
    list).
  • Completely revamped support for memory registration hooks when using
    OS-bypass network transports.
  • Significant OMPIO performance improvements and many bug fixes.
  • Add support for PMIx – Process Management Interface for Exascale.
    Version 1.1.2 of PMIx is included internally in this release.
  • Add support for PLFS file systems in Open MPI I/O.
  • Add support for UCX transport.
  • Simplify build process for Cray XC systems. Add support for
    using native SLURM.
  • Add a --tune mpirun command line option to simplify setting many
    environment variables and MCA parameters.
  • Add a new MCA parameter “orte_default_dash_host” to offer an analogue
    to the existing “orte_default_hostfile” MCA parameter.
  • Add the ability to specify the number of desired slots in the mpirun
    --host option (see the usage note after this list).
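
To make the RMA item concrete, here is a minimal sketch of MPI-3 one-sided communication in C. It is generic MPI-3 code rather than anything Open MPI-specific, and the ring-neighbor pattern is purely illustrative: each rank exposes one integer in a window and writes its own rank into the next rank’s window with MPI_Put. On networks with native RMA support, Open MPI v2.0.0 can map such calls onto the hardware’s own RMA primitives.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Expose one int per process in an RMA window. */
        int *buf;
        MPI_Win win;
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &buf, &win);
        *buf = -1;

        int value = rank;               /* origin buffer; kept valid until the closing fence */
        int peer  = (rank + 1) % size;  /* write into the next rank's window */

        MPI_Win_fence(0, win);          /* open the access epoch */
        MPI_Put(&value, 1, MPI_INT, peer, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);          /* complete all pending RMA operations */

        printf("rank %d: window now holds %d\n", rank, *buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Fence synchronization is the simplest RMA model; MPI-3 also provides passive-target synchronization via MPI_Win_lock and MPI_Win_unlock.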
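
The MPI_THREAD_MULTIPLE item comes into play at initialization time. Here is a minimal sketch in C, assuming a build configured with --enable-mpi-thread-multiple: the program requests full thread support via MPI_Init_thread and checks the level actually granted, since an MPI library may legally provide less than what was requested.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for full multi-threaded support instead of calling MPI_Init(). */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* The library may grant a lower level; always check 'provided'. */
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE unavailable (level %d); "
                    "was Open MPI configured with --enable-mpi-thread-multiple?\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d: running with MPI_THREAD_MULTIPLE\n", rank);

        MPI_Finalize();
        return 0;
    }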
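
A usage note on the command-line items above: MCA parameters such as “mpi_add_procs_cutoff” can be set at launch time with mpirun’s --mca flag, for example “mpirun --mca mpi_add_procs_cutoff 1024 ./app”. For the new slot counts in --host, our reading of the announcement is that each host name takes a count qualifier, along the lines of “mpirun --host node1:4,node2:4 ./app” to request four slots on each node; consult mpirun’s help output for the authoritative syntax.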
