Today Mellanox announced the ConnectX-4 single/dual-port 100Gb/s Virtual Protocol Interconnect (VPI) adapter, the final piece of the industry’s first complete end-to-end 100Gb/s InfiniBand interconnect solution. Doubling the throughput of the previous generation, the ConnectX-4 adapter delivers the consistent high performance and low latency required for HPC, cloud, Web 2.0 and enterprise applications to process and fulfill requests in real time.
Mellanox’s ConnectX-4 VPI adapter delivers 10, 20, 25, 40, 50, 56 and 100Gb/s throughput, supports both the InfiniBand and Ethernet standard protocols, and offers the flexibility to connect any compute architecture – x86, GPU, POWER, ARM, FPGA and more. With world-class performance of 150 million messages per second, latency of 0.7 microseconds, and smart acceleration engines such as RDMA, GPUDirect and SR-IOV, ConnectX-4 will enable the most efficient compute and storage platforms.
“Large-scale clusters have incredibly high demands and require extremely low latency and high bandwidth,” said Jorge Vinals, director at the Minnesota Supercomputing Institute of the University of Minnesota. “Mellanox’s ConnectX-4 will provide us with the node-to-node communication and real-time data retrieval capabilities we needed to make our EDR InfiniBand cluster the first of its kind in the U.S. With 100Gb/s capabilities, the EDR InfiniBand large-scale cluster will become a critical contribution to research at the University of Minnesota.”
ConnectX-4 adapters provide enterprises with a scalable, efficient and high-performance solution for cloud, Web 2.0, HPC and storage applications. The new adapter supports the new RoCE v2 (RDMA over Converged Ethernet) specification; a full range of overlay network technologies, including NVGRE (Network Virtualization using GRE), VXLAN (Virtual Extensible LAN), GENEVE (Generic Network Virtualization Encapsulation) and MPLS (Multi-Protocol Label Switching); and storage offloads such as T10-DIF and RAID offload.
ConnectX-4 adapters will begin sampling with select customers in Q1 2015.