Mellanox InfiniBand Message Rate Performance

Mellanox today released new benchmark data on node-to-node MPI communication performance. The tests were run on Mellanox's ConnectX-2 adapters and IS5000 switches, and according to the release they achieved nearly 90 million messages per second.

"Delivering the highest node-to-node MPI message rate coupled with complete transport offload and MPI accelerations such as MPI collectives offload enable HPC users to build balanced, very efficient, bottleneck-free CPU/GPU systems," said Gilad Shainer, senior director of HPC and technical computing. "Mellanox's high-performance interconnect solutions are designed to support the growing needs of scientists and researchers worldwide with higher application performance, faster parallel communications and the highest scalable message rate."

90 million messages per second is serious speed.
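To put that figure in perspective, here is a quick back-of-the-envelope check. Note that message-rate benchmarks like this typically aggregate many concurrent message streams across cores, so the result below is the implied average spacing between messages, not single-message latency. The 90 million number comes from the release; everything else is simple arithmetic.

```python
# Implied average time per message at the quoted aggregate rate.
msgs_per_sec = 90e6  # ~90 million messages/sec, per the Mellanox release

ns_per_msg = 1e9 / msgs_per_sec  # nanoseconds per message, on average
print(f"~{ns_per_msg:.1f} ns per message")  # → ~11.1 ns per message
```

In other words, on average a new message completes roughly every 11 nanoseconds across the node.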

For more info, read their full release here.

