

Mellanox Demonstrates Next-Gen 100Gb/s LinkX Cables & Silicon Photonics Transceivers

Today Mellanox introduced three new LinkX 100Gb/s solutions with a live demonstration at the OFC 2015 Conference (Booth 2419).

All LinkX 100Gb/s transceivers and cables support the high-density, low-power, QSFP28 connector-based Switch-IB switch platform. The Switch-IB 36-port 100Gb/s InfiniBand switch delivers 7.2Tb/s of aggregate throughput in a 1U form factor, making it the world's highest-performance, ultra-dense end-to-end platform. Its robustness, density, and standard QSFP connectors and cables enable 100Gb/s networks to be as easy to deploy as 10Gb/s networks.

The Mellanox OFC demonstration also featured plug-and-play 100Gb/s LinkX cables and transceivers in an end-to-end network using a new generation of 7.2Tb/s switches and 100Gb/s adapters. At the core of the demonstration are multiple Mellanox Switch-IB EDR 100Gb/s InfiniBand switches, which achieve a world-record port-to-port latency of less than 90ns. Switch-IB has 36 ports of 100Gb/s, providing 7.2Tb/s of switching capacity with ultra-low latency and power consumption. Compared to the previous generation of InfiniBand switches, Switch-IB delivers nearly twice the throughput per port with half the latency.
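The 7.2Tb/s figure follows from counting both directions of each full-duplex port. A minimal arithmetic sketch (illustrative only, not Mellanox software; the function name is ours):

```python
# Illustrative arithmetic: aggregate switching capacity of a 36-port
# 100Gb/s switch, counting both directions of each full-duplex port.
PORTS = 36
PORT_SPEED_GBPS = 100  # EDR InfiniBand per-port rate


def aggregate_capacity_tbps(ports: int, port_speed_gbps: float) -> float:
    """Aggregate switching capacity in Tb/s (both directions counted)."""
    return ports * port_speed_gbps * 2 / 1000


print(aggregate_capacity_tbps(PORTS, PORT_SPEED_GBPS))  # → 7.2
```

Counting only one direction per port would give 3.6Tb/s; the doubled, bidirectional convention is the one the 7.2Tb/s claim uses.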

Also demonstrated were Mellanox's ConnectX-4 100Gb/s interconnect adapters, which deliver 10, 20, 25, 40, 50, 56, and 100Gb/s throughput and support both the InfiniBand and Ethernet standard protocols. ConnectX-4 adapters provide the flexibility to connect any CPU architecture, including x86, GPU, POWER, ARM, and FPGA. With world-class performance of 150 million messages per second, 0.7µs latency, and smart acceleration engines such as RDMA, GPUDirect, and SR-IOV, ConnectX-4 enables highly efficient compute and storage platforms.
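To put the 150 million messages per second figure in perspective, a short arithmetic sketch (illustrative only; the variable names are ours) shows the average spacing between messages that rate implies:

```python
# Illustrative arithmetic: average time budget per message at a
# 150M messages/s sustained rate.
MESSAGES_PER_SEC = 150_000_000

spacing_ns = 1e9 / MESSAGES_PER_SEC  # average gap between messages, in ns
print(round(spacing_ns, 2))  # → 6.67
```

That is roughly 6.7ns per message on average, an order of magnitude below the 0.7µs (700ns) end-to-end latency, which is possible because many messages are in flight concurrently.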
