New Mellanox Networking Solutions Accelerate NVMe Over Fabrics


Today Mellanox announced a family of end-to-end networking solutions and software for connecting solid-state storage to the fabric. The Mellanox ConnectX-4 adapter, ConnectX-5 adapter, and BlueField family of programmable processors support smart offloads that connect solid-state drives (SSDs) directly to the network in the most efficient way possible, simplifying system design and reducing both power and storage system costs.

“We’ve seen the rapid evolution of SSDs and have been contributing to the NVMe over Fabrics standard and community drivers,” said Michael Kagan, CTO at Mellanox Technologies. “Because faster storage requires faster networks, we designed the highest speeds and most intelligent offloads into both our ConnectX-5 and BlueField families. This lets us connect many SSDs directly to the network at full speed, without dedicating many CPU cores to managing data movement, and we provide a complete end-to-end networking solution with the highest-performing 25, 50, and 100GbE switches and cables as well.”

The recently launched ConnectX-5 adapter includes hardware offloads for the newly approved NVMe over Fabrics standard, removing the storage system processor from the data path. This enables flash-based storage platforms to connect more NVMe SSDs than ever before without adding costly CPUs to the system. Both the ConnectX-4 and ConnectX-5 adapters integrate full hardware support for Remote Direct Memory Access (RDMA) over both InfiniBand and Ethernet, at network speeds ideally matched to flash storage: 25, 40, 50, and 100Gb/s. Using the newly released Resilient RoCE software, NVMe over Fabrics solutions over Ethernet can be deployed easily in ordinary enterprise data centers.
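
To make the deployment story concrete, here is a minimal sketch of how a Linux host might attach to an NVMe over Fabrics target across an RDMA (RoCE) network using the standard nvme-cli tool; the target address, port, and subsystem NQN below are hypothetical placeholders, not values from Mellanox.

```python
import subprocess

# Hypothetical target parameters for illustration only; substitute the
# values exported by your own NVMe over Fabrics target.
TRADDR = "192.168.1.10"   # RDMA-capable (RoCE) address of the target
TRSVCID = "4420"          # conventional NVMe over Fabrics service port
NQN = "nqn.2016-06.io.example:flash-shelf-1"  # hypothetical subsystem NQN

def connect_nvmeof_target() -> None:
    """Attach a remote NVMe subsystem over RDMA with nvme-cli."""
    subprocess.run(
        ["nvme", "connect",
         "-t", "rdma",    # transport: RDMA (InfiniBand or RoCE)
         "-a", TRADDR,    # target network address
         "-s", TRSVCID,   # target service ID (port)
         "-n", NQN],      # NVMe Qualified Name of the remote subsystem
        check=True,
    )

def list_nvme_devices() -> str:
    """Return the nvme-cli listing of local and fabric-attached devices."""
    result = subprocess.run(
        ["nvme", "list"], check=True, capture_output=True, text=True
    )
    return result.stdout

if __name__ == "__main__":
    connect_nvmeof_target()
    print(list_nvme_devices())
```

Once connected, the remote namespace appears as an ordinary local block device (e.g., /dev/nvme1n1); with RDMA and the adapter offloads described above, data moves between SSD and network largely without consuming host CPU cycles for copies.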

BlueField combines Mellanox ConnectX-5 network acceleration, RDMA and NVMe over Fabrics offloads, and an advanced PCIe Gen4 switch with an array of high-performance 64-bit ARMv8 cores in a compact and efficient System on Chip (SoC). The resulting device is an ideal networked storage controller for solid-state flash arrays, eliminating the need for expensive PCI Express switches and costly CPUs in each SSD enclosure and connecting flash devices directly to the network for the most efficient storage access.

These smart adapters operate seamlessly with the newly released Spectrum SN2100 Ethernet switch, which supports up to 16 ports at 100GbE, 32 ports at 50GbE, or 64 ports at 25GbE. Its innovative half-width, 1U form factor allows dual connectivity to storage arrays, flash shelves, and flash-accelerated servers for high availability at speeds ideal for flash storage. Mellanox LinkX copper and optical cables and transceivers complete the solution, supporting 25, 50, and 100Gb/s speeds over distances from one meter to 100km.

“25, 50, and 100GbE are ideal speeds for connecting SSDs to the network,” said Dennis Martin, Storage Performance Expert and President at Demartek. “The newest NVMe SSDs can sustain approximately 25 Gigabits per second of read throughput and have lower latency than earlier generations of SSDs, so the higher bandwidth and lower latency of 25, 50, and 100 Gigabit Ethernet networking is ideal for taking full advantage of their performance.”
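
As a quick back-of-envelope check on those figures, here is a sketch assuming the roughly 25Gb/s sustained read rate quoted above and ignoring protocol overhead, which reduces usable bandwidth slightly in practice:

```python
# How many ~25 Gb/s NVMe SSDs can each Ethernet speed carry at full
# read throughput? Rough arithmetic only; real deployments lose a few
# percent to protocol overhead.
SSD_READ_GBPS = 25  # approximate sustained read throughput per NVMe SSD

for link_gbps in (25, 50, 100):
    ssds = link_gbps / SSD_READ_GBPS
    print(f"{link_gbps} GbE: ~{ssds:.0f} SSD(s) at full read speed")

# Output:
# 25 GbE: ~1 SSD(s) at full read speed
# 50 GbE: ~2 SSD(s) at full read speed
# 100 GbE: ~4 SSD(s) at full read speed
```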

Both ConnectX-5 and BlueField offload NVMe over Fabrics protocol translation to hardware, eliminating the need to route each storage transaction through the CPU. Both devices integrate a PCIe switch supporting PCIe Gen3 and Gen4, allowing direct connection to current SSDs as well as to next-generation SSDs that use fewer lanes to move more data.
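
The “fewer lanes” point follows from PCIe arithmetic: Gen4 doubles the per-lane transfer rate of Gen3 (16 GT/s versus 8 GT/s, both with 128b/130b line encoding), so a Gen4 x2 link carries roughly the same bandwidth as a Gen3 x4 link. A short sketch of the numbers:

```python
# Usable per-lane bandwidth for PCIe Gen3 and Gen4, both of which use
# 128b/130b line encoding. Illustrates why Gen4 SSDs can use fewer
# lanes to move the same (or more) data.
ENCODING = 128 / 130  # usable fraction after 128b/130b encoding

def lane_gb_per_s(gt_per_s: float) -> float:
    """Usable GB/s per lane: GT/s * encoding efficiency / 8 bits per byte."""
    return gt_per_s * ENCODING / 8

gen3 = lane_gb_per_s(8.0)    # PCIe Gen3: 8 GT/s per lane (~0.98 GB/s)
gen4 = lane_gb_per_s(16.0)   # PCIe Gen4: 16 GT/s per lane (~1.97 GB/s)

print(f"Gen3 x4: {4 * gen3:.2f} GB/s")  # ~3.94 GB/s
print(f"Gen4 x2: {2 * gen4:.2f} GB/s")  # ~3.94 GB/s with half the lanes
```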
