Why Intel Omni-Path is Growing Fast on the TOP500

Joe Yaworski, VP Intel Architecture Group & GM Technical Computing at Intel

In this video from SC16, Joe Yaworski describes how Intel Omni-Path is gaining traction on the TOP500. As the interconnect for the Intel Scalable System Framework, Omni-Path is focused on delivering the best possible application performance.

“In the nine months since Intel Omni-Path Architecture (Intel OPA) began shipping, it has become the standard fabric for 100 gigabit (Gb) systems. Intel OPA is featured in 28 of the top 500 most powerful supercomputers in the world announced at Supercomputing 2016 and now has 66 percent of the 100Gb market. Top500 designs include Oakforest-PACS, MIT Lincoln Lab and CINECA.”

Highlights include:

  • With 28 clusters in the November 2016 TOP500 list, Intel OPA has twice as many systems as InfiniBand* EDR and now accounts for around 66 percent of all 100Gb systems. Additionally, two systems are ranked in the top 15: Oakforest-PACS is ranked sixth with 8,208 nodes and CINECA is ranked 12th with 3,556 nodes. The Intel OPA systems on the list add up to a total of 43.7 petaflops (Rmax), or 2.5 times the FLOPS of all InfiniBand* EDR systems.
  • Intel OPA has seen rapid market adoption in the nine months it has been shipping broadly, driven by clear customer benefits such as high performance, price-performance and innovative fabric features, such as error detection and correction without additional latency.
  • Intel OPA is an end-to-end fabric solution that improves HPC workloads for clusters of all sizes, achieving up to 9 percent higher application performance and up to 37 percent lower fabric costs on average compared to InfiniBand EDR.

Major installations for Intel OPA include the University of Tokyo and Tsukuba University (JCAHPC), Texas Tech University, the University of Washington, the University of Colorado Boulder, MIT Lincoln Lab, and Met Malaysia. There are now well over 100 successfully deployed Intel OPA clusters, and most adoption is a result of competitive benchmarking and leadership price-performance.

See our complete coverage of SC16

Sign up for our insideHPC Newsletter

Comments

  1. Nigel Williams says

    “…100 gigabyte (GB) system” should be 100 gigabit (Gb) — note the lowercase ‘b’. Units really matter in HPC; a good habit is to write Gbit to avoid confusion.