High Performance System Interconnect Technology


This is the second in our series of features covering high performance system interconnect technology for high performance networking and computing. The series, compiled in a complete Guide available here, focuses on today’s top trends in HPC networking, as well as how to select an HPC technology and solution partner.

Available High Performance System Interconnect Technology

Today, high performance system interconnect technology can be divided into three categories: Ethernet, InfiniBand, and vendor specific interconnects, a category that includes custom interconnects and the recently introduced Intel Omni-Path technology.

Ethernet

Ethernet is established as the dominant low level interconnect standard for mainstream commercial computing requirements. Above the physical layer, the software used to coordinate communication converged on TCP/IP, which became the primary commercial networking protocol. Ethernet has continued to evolve, with specifications driving performance from the initial 3 Mbps to 100 Gbps today and 400 Gbps expected in 2017. Given its ubiquity and continuing development, Ethernet is clearly the dominant network for mainstream computing needs where a physical connection is required. When it fits, it is often the best option, but for high bandwidth, low latency deployments, better alternatives have emerged.

InfiniBand

InfiniBand originated in 1999 specifically to address workload requirements that Ethernet did not adequately meet, along with interoperability requirements that the proprietary technologies of the day could not satisfy. The initial specification, released in 2000 by the InfiniBand Trade Association (IBTA), led to today’s InfiniBand standard, which currently leads in high bandwidth and low latency and co-exists with Ethernet.


InfiniBand is designed for scalability, using a switched fabric network topology together with remote direct memory access (RDMA) to reduce CPU overhead. The InfiniBand protocol stack is considered less burdensome than the TCP stack required for Ethernet, which allows InfiniBand to maintain a performance and latency edge over Ethernet in many high performance workloads. The IBTA roadmap shows bandwidth for HDR InfiniBand reaching 600 Gbps by 2017.
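
To make the RDMA point concrete, the sketch below is a minimal illustration (not a production setup) using the standard libibverbs API from the rdma-core package: it opens the first RDMA-capable adapter it finds and registers a buffer for remote access. Once a buffer is registered, the adapter can move data into and out of it without CPU-driven copies, which is where the CPU-overhead savings come from. The port number and buffer size here are arbitrary choices for illustration.

```c
/* Minimal libibverbs sketch: open the first RDMA-capable device and
 * register a buffer for remote access.  Assumes rdma-core/libibverbs is
 * installed; compile with `gcc rdma_setup.c -libverbs`. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE (4 * 1024)

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n",
                ibv_get_device_name(dev_list[0]));
        return 1;
    }

    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port) == 0)
        printf("device %s, port 1 state: %s\n",
               ibv_get_device_name(dev_list[0]),
               port.state == IBV_PORT_ACTIVE ? "ACTIVE" : "not active");

    /* Protection domain and memory registration: once the buffer is
     * registered, the adapter can read/write it directly (RDMA) without
     * involving the CPU in the data path. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    void *buf = malloc(BUF_SIZE);
    memset(buf, 0, BUF_SIZE);

    struct ibv_mr *mr = pd ? ibv_reg_mr(pd, buf, BUF_SIZE,
                                        IBV_ACCESS_LOCAL_WRITE |
                                        IBV_ACCESS_REMOTE_READ |
                                        IBV_ACCESS_REMOTE_WRITE) : NULL;
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    printf("registered %d bytes: lkey=0x%x rkey=0x%x\n",
           BUF_SIZE, mr->lkey, mr->rkey);

    /* A real application would go on to create queue pairs, exchange
     * keys with the peer, and post RDMA read/write work requests. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    free(buf);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```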

Efforts to implement RDMA over Converged Ethernet (RoCE), the Internet Wide Area RDMA Protocol (iWARP) and other initiatives are narrowing the gap between Ethernet and InfiniBand performance, but InfiniBand still holds the advantage for the most demanding parallel workloads. In terms of performance, latency is a critical metric. Ethernet round-trip TCP or UDP latencies can be as low as 3 microseconds, while InfiniBand latencies can be significantly below 1 microsecond. Reported latency with RoCE has reached 1.3 microseconds, while EDR InfiniBand tests have reported application latency of 610 nanoseconds.
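
Round-trip figures like these are typically gathered with ping-pong microbenchmarks. The sketch below is a minimal, illustrative TCP version, not a calibrated benchmark: it assumes an echo service is already listening on the remote host (the IP address, port and iteration count are placeholder choices), sends a small message repeatedly and averages the measured round trips, which is roughly how the microsecond numbers above are derived.

```c
/* Minimal TCP round-trip latency probe: sends a small message and waits
 * for the echo, timing each round trip.  Assumes an echo server is
 * already listening on the target host and port. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 10000
#define MSG_SIZE   64          /* small payload, latency-dominated */

int main(int argc, char **argv)
{
    const char *server_ip = (argc > 1) ? argv[1] : "127.0.0.1";
    int port = (argc > 2) ? atoi(argv[2]) : 5001;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Disable Nagle so small messages are sent immediately, not batched. */
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect"); return 1;
    }

    char buf[MSG_SIZE] = {0};
    struct timespec start, end;
    double total_us = 0.0;

    for (int i = 0; i < ITERATIONS; i++) {
        clock_gettime(CLOCK_MONOTONIC, &start);
        if (send(fd, buf, MSG_SIZE, 0) != MSG_SIZE) { perror("send"); break; }
        ssize_t got = 0;
        while (got < MSG_SIZE) {                 /* wait for the full echo */
            ssize_t n = recv(fd, buf + got, MSG_SIZE - got, 0);
            if (n <= 0) { perror("recv"); close(fd); return 1; }
            got += n;
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        total_us += (end.tv_sec - start.tv_sec) * 1e6 +
                    (end.tv_nsec - start.tv_nsec) / 1e3;
    }

    printf("average round-trip latency: %.2f us over %d iterations\n",
           total_us / ITERATIONS, ITERATIONS);
    close(fd);
    return 0;
}
```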

Although InfiniBand is backed by a standards organization with formal and open multi-vendor processes, the InfiniBand market is currently dominated by a single significant vendor. However, other major vendors are able to design and bring their own InfiniBand-compliant products to market.

Vendor Specific Interconnects

Vendor specific technologies frequently have a time-to-market (and therefore performance) advantage over standardized offerings. The fastest systems on the TOP500 list, for example, usually include a healthy proportion built with vendor specific interconnects; currently, vendor specific interconnects are concentrated in the TOP50 and dominate the TOP10.

In recent years, the most common vendor specific interconnects have been IBM’s Blue Gene and Cray’s Aries interconnects, deployed in combination with InfiniBand, Ethernet or Fibre Channel (FC) for connection to storage systems.

Significant change is occurring in the vendor specific landscape, with older companies being acquired or superseded by InfiniBand and other new technologies, which adds complexity to the market. The most significant of these acquisitions were made by Intel, which acquired QLogic’s InfiniBand assets as well as Cray’s Gemini and Aries interconnect technologies. These acquisitions formed the foundation of, and accelerated, Intel’s Omni-Path strategy to enter the high performance interconnect (HPI) market. By acquiring some of the best interconnect capabilities available, Intel has positioned itself to make a credible bid for the leadership position.

Introduced in 2015, Intel’s end-to-end Omni-Path Architecture (OPA) targets the InfiniBand market, claiming higher messaging rates and lower latency in addition to advanced features such as traffic flow optimization, packet integrity protection and dynamic lane scaling. Intel OPA is a cornerstone of the company’s strategy to take an integrated and coherent approach to system architecture to advance HPC workload performance. Intel OPA follows the go-to-market approach of Intel’s x86 processor architecture: it is not a formal standard, so the specification remains under Intel’s control, but it is positioned as a ‘de facto’ standard, a core technology brought to market by multiple vendors, falling between completely closed proprietary offerings and formal, open standards.

Over the next few weeks, this series will dive into additional topics on HPC networking.

If you prefer, you can download the complete report, A Trusted Approach for High Performance Networking, courtesy of Dell EMC and Intel.