The latest issue of HPC Projects has an article describing the basics of InfiniBand interconnects that anyone considering building a cluster, or already running an Ethernet cluster, needs to know. We seem to have a theme running through recent posts on the InfiniBand-versus-Ethernet debate these days…
For many people configuring an HPC system, the first thought on interconnect might be to use the Ethernet ports that come standard on virtually every server and are therefore essentially ‘free’, since they seemingly add no cost to the system. But does that logic really hold water?
In many cases, it does not. By adding an interconnect that transfers data at much greater speeds and with lower latency, you can improve system performance to the point where applications run much more quickly, saving engineering and design time, and additional servers might not be necessary.
The article outlines the basics of the technology and the various flavors of IB gear you can find today, and offers some views on where and why it has advantages over Ethernet. If you aren’t very familiar with InfiniBand, it’s a good read.
In comparing InfiniBand and Ethernet, says Voltaire’s Somekh, one of the most important parameters people should look at is network efficiency: what is the impact of the network on application efficiency? This simple metric, he believes, sums up the case for the alternative approach. With large data transfers, Ethernet consumes as much as 50 per cent of the CPU cycles; with InfiniBand the loss averages less than 10 to 20 per cent. So, while you might not pay more in hardware costs to implement an Ethernet network, for HPC you will spend longer running applications to get results, which means extra development and analysis time, or you might end up purchasing extra compute nodes to provide the horsepower.
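To put rough numbers on that trade-off, here is a back-of-envelope sketch. The overhead figures (50 per cent for Ethernet, 15 per cent as a midpoint of the 10 to 20 per cent quoted for InfiniBand) are illustrative assumptions taken from the ranges above, not measurements of any particular system.

```python
# Back-of-envelope estimate: how interconnect CPU overhead translates
# into extra compute nodes. Overhead fractions are illustrative
# assumptions drawn from the ranges quoted above.

def nodes_needed(useful_nodes: float, cpu_overhead_fraction: float) -> float:
    """Nodes required to deliver a given amount of useful compute when a
    fraction of each node's CPU cycles is consumed by network processing."""
    useful_fraction = 1.0 - cpu_overhead_fraction
    return useful_nodes / useful_fraction

target = 100                # we want the equivalent of 100 fully-utilised nodes
ethernet_overhead = 0.50    # up to ~50% of cycles lost to the network stack
infiniband_overhead = 0.15  # roughly 10-20% loss; take the midpoint

print(f"Ethernet:   {nodes_needed(target, ethernet_overhead):.0f} nodes")
print(f"InfiniBand: {nodes_needed(target, infiniband_overhead):.0f} nodes")
# Ethernet:   200 nodes
# InfiniBand: 118 nodes
```

Under those assumptions, the ‘free’ interconnect costs you roughly 80 extra servers for the same useful throughput, which is the point Somekh is making.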