By Timothy Prickett Morgan
Networking chip, adapter card and switch maker Mellanox is rounding out its converged InfiniBand-Ethernet product line with the debut of the ConnectX-3 integrated circuits and network adapter cards built using the chips.
Mellanox has been selling multi-protocol chips and adapter cards for servers for a number of years, and back in April the company announced its first switch-hitting chips, called SwitchX, which implement both 40Gb/sec Ethernet and 56Gb/sec InfiniBand on the same piece of silicon. Those SwitchX chips came to market in May at the heart of the SX1000 line of 40GE switches. Later this year, the SwitchX silicon will be used to make a line of InfiniBand switches and eventually, when the multiprotocol software is fully cooked, will come out in a line of switches that can dynamically switch between Ethernet and InfiniBand on a port-by-port basis.
The long-term goal at Mellanox – and one of the reasons it bought two-timing InfiniBand and Ethernet switch maker Voltaire back in November for $218m – is to allow customers to wire once and then switch protocols on the server and the switch as workloads require. Mellanox can presumably charge a premium for such capability, and both the SwitchX and ConnectX-3 silicon allow the company to create fixed adapters and switches at specific speeds to target specific customer needs and lower price points, too.
The ConnectX-3 silicon announced today is the first Fourteen Data Rate (FDR, running at 56Gb/sec) InfiniBand adapter chip to come to market. When running the InfiniBand protocol, it supports Remote Direct Memory Access (RDMA); Fibre Channel over InfiniBand (FCoIB); and Ethernet over InfiniBand (EoIB). RDMA is the key feature that lowers latencies on server-to-server links because it allows a server to bypass the entire network stack and reach right into the main memory of an adjacent server over InfiniBand links and grab some data.
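For the curious, here is a rough sketch of what that looks like to software through the OpenFabrics verbs API (libibverbs, the same driver stack mentioned further down). The function and variable names are illustrative, not Mellanox's, and the sketch assumes the queue pair has already been connected and the peer's buffer address and rkey have been swapped out of band (say, over a plain TCP socket):

```c
/* Minimal sketch of an RDMA READ via libibverbs. Assumes the queue pair 'qp'
 * is already connected (INIT -> RTR -> RTS), the local buffer is registered
 * with IBV_ACCESS_LOCAL_WRITE, the remote buffer with IBV_ACCESS_REMOTE_READ,
 * and the peer's address/rkey were exchanged out of band. Error handling and
 * connection setup are omitted for brevity. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

int rdma_read_remote(struct ibv_qp *qp, struct ibv_cq *cq,
                     struct ibv_mr *local_mr, void *local_buf, size_t len,
                     uint64_t remote_addr, uint32_t remote_rkey)
{
    /* Describe the local buffer that will receive the remote data. */
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = local_mr->lkey,
    };

    /* Work request: pull 'len' bytes straight out of the peer's memory.
     * The remote CPU and its network stack are not involved -- the adapter
     * does the transfer. */
    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.wr_id               = 1;
    wr.opcode              = IBV_WR_RDMA_READ;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* ask for a completion */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = remote_rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Busy-poll the completion queue until the read has landed. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;
    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}
```

The interesting part is the work request: the local adapter reaches into the remote host's registered memory and copies the data back without the far end's operating system ever seeing a packet.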
The ConnectX-3 chip supports InfiniBand running at 10Gb/sec, 20Gb/sec, 40Gb/sec, and 56Gb/sec speeds. On the Ethernet side, the ConnectX-3 chip implements the 10GE or 40GE protocols and supports RDMA over Converged Ethernet (RoCE), Fibre Channel over Ethernet (FCoE), and Data Center Bridging (DCB). The new silicon also supports SR-IOV – an I/O virtualization and isolation standard that allows multiple operating systems to share a single PCI device – and IEEE 1588, a standard for synchronizing host server clocks to a master data center clock.
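As a loose illustration of what SR-IOV buys you on the host side, here is a sketch of carving an adapter into virtual functions through the Linux sysfs interface, each of which can then be handed to a guest OS. The PCI address is made up, and the sriov_totalvfs/sriov_numvfs attributes assume a kernel recent enough to expose them; older setups do the same thing through vendor driver parameters:

```c
/* Rough sketch: enable SR-IOV virtual functions on a Linux host by writing
 * the PCI device's sysfs attributes. The device address below is invented,
 * and the code assumes a kernel that exposes sriov_totalvfs/sriov_numvfs. */
#include <stdio.h>

int main(void)
{
    const char *dev = "/sys/bus/pci/devices/0000:03:00.0"; /* hypothetical adapter */
    char path[256];
    int total = 0;

    /* How many virtual functions does the adapter advertise? */
    snprintf(path, sizeof(path), "%s/sriov_totalvfs", dev);
    FILE *f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &total) != 1) {
        perror("sriov_totalvfs");
        return 1;
    }
    fclose(f);
    printf("adapter supports up to %d VFs\n", total);

    /* Carve out four VFs; each can be passed through to a different guest. */
    snprintf(path, sizeof(path), "%s/sriov_numvfs", dev);
    f = fopen(path, "w");
    if (!f) {
        perror("sriov_numvfs");
        return 1;
    }
    fprintf(f, "%d\n", 4);
    fclose(f);
    return 0;
}
```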
John Monson, vice president of marketing at Mellanox, tells El Reg that the important thing about the ConnectX-3 adapter card chip is that it is tuned to match the bandwidth of the forthcoming PCI-Express 3.0 bus. PCI-Express 3.0 slots are expected to arrive with the next generation of servers later this year, and Ethernet and InfiniBand adapter cards are usually designed for x8 slots. The ConnectX-3 chip can also be implemented on PCI-Express 1.1 or 2.0 peripherals if companies want to make cards that run at lower speeds on slower buses.
The ConnectX-3 chip is small enough to be implemented as a single-chip LAN-on-motherboard (LOM) module, which is perhaps the most important factor in allowing widespread adoption of 10GE and, later, 40GE networking in data centers. The ConnectX-3 chip includes PHY networking features, so you don’t have to add these to the LOM; all you need are some capacitors and resistors and you are good to go, says Monson. The ConnectX-3 chip will also be used in PCI adapter cards and in mezzanine cards that slide into special slots on blade servers. Hewlett-Packard, IBM, Dell, Fujitsu, Oracle, and Bull all OEM Mellanox silicon, adapter cards, or mezz cards for their respective server lines to support InfiniBand, Ethernet, or converged protocols. It is not entirely clear if blade server makers will go with their current mezz card designs or implement LOM for 10GE networking. “It will be interesting to see how this will play out,” Monson says.
The ConnectX-3 chip has enough oomph to implement two 56Gb/sec InfiniBand ports, two 40Gb/sec Ethernet ports, or one of each. Obviously, with an x8 PCI-Express 3.0 slot running at 8GT/sec, you have a peak of 64Gb/sec across eight lanes on the bus, and with encoding you might be down somewhere around 56Gb/sec for a single x8 slot. So putting two FDR InfiniBand or 40GE ports on the same bus could saturate it, depending on the workload. (It is a wonder that network cards for HPC servers are not made to plug into x16 slots, but for whatever reason, they are not.)
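As a rough sanity check on that arithmetic: the encoding factor below is PCI-Express 3.0's 128b/130b line code, and the packet/protocol overhead estimate is approximate rather than a Mellanox figure.

```latex
% Back-of-the-envelope bandwidth for an x8 PCI-Express 3.0 slot
\begin{align*}
  \text{raw signalling rate} &= 8\,\mathrm{GT/s} \times 8\ \text{lanes} = 64\,\mathrm{Gb/s} \\
  \text{after 128b/130b encoding} &\approx 64 \times \tfrac{128}{130} \approx 63\,\mathrm{Gb/s} \\
  \text{after packet/protocol overhead} &\approx 56\,\mathrm{Gb/s}
\end{align*}
```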
Mellanox is happy to sell its ConnectX-3 silicon to anyone who wants to make network adapters, but is keen on selling its own adapters, of course. The ConnectX-3 chip is sampling now and will be generally available in a few months.
Shiny new cards
A bunch of different adapter cards have already been cooked up by Mellanox engineers. One card has a single 40Gb/sec (Quad Data Rate) InfiniBand port, one has two QDR ports, another has one FDR port, and yet another has two FDR ports. There is a card coming out from Mellanox that has one FDR port using QSFP cables and one 10GE port using SFP+ cables. If pure Ethernet is your thing, Mellanox has one card with two 10GE ports with SFP+ cables, another with a single 40GE port with a QSFP cable, and yet another with two 40GE ports with QSFP cables.
Mellanox is only beginning its benchmark tests on the ConnectX-3 chips, but Monson says the performance and thermal characteristics of the chips are encouraging. The current ConnectX-2 chips, which support Ethernet with RoCE extensions, have a latency of around 1.3 microseconds per 10GE port, compared to something on the order of 5 to 10 microseconds for other commercially available adapters. A ConnectX-3 chip implemented in an FDR InfiniBand or 40GE port with either RDMA or RoCE turned on will get sub-microsecond latency. And, says Monson, that ConnectX-3 port will consume under 3 watts, which compares favorably to the 8 to 10 watts per port that other network interface cards consume today.
The ConnectX-3 chips are supported on the current releases of Linux from SUSE and Red Hat as well as on Microsoft’s Windows Server 2008 (including the HPC edition). VMware’s ESX 3.5, 4.0, and 4.1 hypervisors and Citrix Systems’ XenServer 4.1, 5.0, and 5.5 hypervisors will also recognize devices based on these chips. The OpenFabrics drivers for Linux and Windows are also certified on the ConnectX-3 silicon. ®
This article originally appeared in The Register.