Mellanox Rolls Out EDR InfiniBand Routers

Today Mellanox announced a new line of InfiniBand router systems. The new EDR 100Gb/s InfiniBand routers enable a new level of scalability, critical for the next generation of mega data-center deployments, along with expanded capabilities for isolating different users and applications within the data center. The router delivers the consistent, high-performance, low-latency routing that is mission critical for high performance computing, cloud, Web 2.0, machine learning and enterprise applications.

“The SB7780 InfiniBand Router adds another layer to Mellanox’s solutions that pave the road to Exascale,” said Gilad Shainer, vice president of Marketing at Mellanox. “This new InfiniBand Router gives us the ability to scale up to a virtually unlimited number of nodes and yet sustain the data processing demands of machine learning, IoT, HPC and cloud applications. Mellanox’s EDR 100Gb/s InfiniBand solutions, together with the SB7780 router, represent the only scalable solution currently available on the market that supports these needs.”

Mellanox’s SB7780 InfiniBand Router family is based on the Switch-IB™ switch ASIC and offers 36 fully flexible EDR 100Gb/s ports, which can be split among six different subnets. The InfiniBand Router brings two major enhancements to the Mellanox switch portfolio:

  • Increases resiliency by segregating the data center’s network into several subnets, each running its own subnet manager (SM); isolating the subnets from one another in this way provides better availability and stability (a minimal sketch of this split follows the list).
  • Enables scaling the fabric up to a virtually unlimited number of nodes.
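To make the split concrete, here is a minimal Python sketch of the idea, assuming an even division of the SB7780’s 36 EDR ports among named subnets. The `Subnet` class and `split_router` helper are hypothetical illustrations of the architecture described above, not Mellanox software.

```python
from dataclasses import dataclass

PORT_SPEED_GBPS = 100   # EDR InfiniBand per-port data rate
TOTAL_PORTS = 36        # ports on the Switch-IB based SB7780
MAX_SUBNETS = 6         # subnets the ports can be split among, per the article

@dataclass
class Subnet:
    name: str
    ports: int
    sm_alive: bool = True  # each subnet runs its own subnet manager (SM)

    @property
    def bandwidth_gbps(self) -> int:
        return self.ports * PORT_SPEED_GBPS

def split_router(port_counts: dict) -> list:
    """Partition the router's ports among named subnets (hypothetical helper)."""
    if len(port_counts) > MAX_SUBNETS:
        raise ValueError(f"at most {MAX_SUBNETS} subnets per router")
    if sum(port_counts.values()) > TOTAL_PORTS:
        raise ValueError(f"only {TOTAL_PORTS} ports available")
    return [Subnet(name, n) for name, n in port_counts.items()]

subnets = split_router({"storage": 12, "compute-a": 12, "compute-b": 12})

# The isolation claim: an SM failure in one subnet leaves the others routable.
subnets[1].sm_alive = False
for s in subnets:
    state = "up" if s.sm_alive else "SM down (other subnets unaffected)"
    print(f"{s.name}: {s.ports} ports, {s.bandwidth_gbps} Gb/s, {state}")
```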

The SB7780 InfiniBand Router can connect between different types of topologies, which allows each subnet’s topology to be chosen to maximize its applications’ performance. For example, the storage subnets may use a Fat-Tree topology while the compute subnets may use 3D-Torus, Dragonfly+, Fat-Tree or other topologies that best fit the local application. The SB7780 can also help split the cluster in order to segregate applications that run best on localized resources from applications that require a full fabric.
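As a rough illustration of this mix-and-match flexibility, the following hypothetical sketch tags each subnet with one of the topologies named in the article and checks that cross-subnet traffic remains routable through the router; the names and the `can_route` check are illustrative only, not a Mellanox API.

```python
# Each subnet picks the topology that suits its workload; the router
# bridges them regardless of their internal topology.
subnet_topology = {
    "storage": "Fat-Tree",      # storage traffic patterns favor Fat-Tree
    "compute-a": "Dragonfly+",  # compute subnets pick what fits locally
    "compute-b": "3D-Torus",
}

def can_route(src: str, dst: str) -> bool:
    """Cross-subnet traffic passes through the router, so two subnets
    can communicate even when their internal topologies differ."""
    return src in subnet_topology and dst in subnet_topology

# A compute job on a Dragonfly+ subnet reaching the Fat-Tree storage subnet:
assert can_route("compute-a", "storage")
print("compute-a (Dragonfly+) <-> storage (Fat-Tree): routable via SB7780")
```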

“This new technology will allow us to enable isolation between high-performance compute systems while allowing access to our center-wide storage resources, and allow us to continue to expand our connectivity to meet future needs,” said Scott Atchley, HPC Systems Engineer at Oak Ridge National Laboratory.