Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC

Today Mellanox announced that EDR 100Gb/s InfiniBand solutions have demonstrated 30 to 250 percent higher HPC application performance versus Omni-Path. These performance tests were conducted at end-user installations and the Mellanox benchmarking and research center, and covered a variety of HPC application segments including automotive, climate research, chemistry, bioscience, genomics and more.

“InfiniBand solutions enable users to maximize their data center performance and efficiency versus proprietary competitive products. EDR InfiniBand enables users to achieve 2.5X higher performance while reducing their capital and operational costs by 50 percent,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “As a standard and intelligent interconnect, InfiniBand guarantees both backward and forward compatibility, and delivers optimized data center performance to users for any compute elements – whether they include CPUs by Intel, IBM, AMD or ARM, or GPUs or FPGAs. Utilizing the InfiniBand interconnect, companies can gain a competitive advantage, reducing their product design time while saving on their needed data center infrastructure.”

Examples of extensively used mainstream HPC applications:

  • GROMACS is a molecular dynamics package designed for simulations of proteins, lipids and nucleic acids, and is one of the fastest and most broadly used applications for chemical simulations. GROMACS demonstrated a 140 percent performance advantage on an InfiniBand-enabled 64-node cluster.
  • NAMD, noted for its parallel efficiency, is used to simulate large biomolecular systems and plays an important role in modern molecular biology. Using InfiniBand, the NAMD application demonstrated a 250 percent performance advantage on a 128-node cluster.
  • LS-DYNA is an advanced multi-physics simulation software package used across the automotive, aerospace, manufacturing and bioengineering industries. Using the InfiniBand interconnect, the LS-DYNA application demonstrated a 110 percent performance advantage running on a 32-node cluster.

Due to its scalability and offload technology advantages, InfiniBand has demonstrated higher performance while utilizing just 50 percent of the data center infrastructure, thereby enabling the industry’s lowest Total Cost of Ownership (TCO) for these applications and HPC segments. For the GROMACS application, a 64-node InfiniBand cluster delivers 33 percent higher performance than a 128-node Omni-Path cluster; for the NAMD application, a 32-node InfiniBand cluster delivers 55 percent higher performance than a 64-node Omni-Path cluster; and for the LS-DYNA application, a 16-node InfiniBand cluster delivers 75 percent higher performance than a 32-node Omni-Path cluster.
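The half-the-nodes comparisons above can be restated as a per-node (performance-per-infrastructure) figure, which is what drives the TCO claim. A minimal sketch of that arithmetic, using the GROMACS numbers quoted above (throughputs are normalized so Omni-Path = 1.0; the absolute throughput values are illustrative, not measured data):

```python
def relative_advantage(ib_perf, opa_perf, ib_nodes, opa_nodes):
    """Return (overall performance advantage, per-node efficiency advantage)
    of an InfiniBand cluster over an Omni-Path cluster, as fractions."""
    perf_gain = ib_perf / opa_perf - 1.0
    per_node_gain = (ib_perf / ib_nodes) / (opa_perf / opa_nodes) - 1.0
    return perf_gain, per_node_gain

# GROMACS example from the article: a 64-node InfiniBand cluster is quoted
# as 33 percent faster than a 128-node Omni-Path cluster.
perf, per_node = relative_advantage(1.33, 1.0, 64, 128)
print(f"performance: +{perf:.0%}, per node: +{per_node:.0%}")
# performance: +33%, per node: +166%
```

The per-node figure makes the TCO argument explicit: matching (in fact exceeding) the competing result with half the nodes means each InfiniBand node delivers well over twice the useful throughput of its Omni-Path counterpart.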

The application testing was conducted at end-user data centers and the Mellanox benchmarking and research center; the full report will be available on the Mellanox web site. For more information, please contact Mellanox Technologies.

Sign up for our insideHPC Newsletter

Comments

  1. Andrew James says:

    This corresponds to the fact that 90% of recent studies showed 73% of all performance statistics are made up…. Mellanox at its finest! Just percentages with no data?!?!? Wait… coming soon, right?

    Meanwhile, HPE, Dell EMC, Huawei, Lenovo, etc, all say that the tech is 1:1 on par!! AND they all have benchmark data to prove it!

    • Gilad Shainer says:

      The testing was done at end-user sites by one of what you call “all”. There are enough examples out there, not just from Mellanox, that show the difference in performance between the two options. It will be interesting to see the benchmarks that prove the other benchmarks wrong.
