

Intel Omni-Path Architecture: The Real Numbers

Joe Yaworski, Intel

In this slidecast, Joe Yaworski from Intel describes the Intel Omni-Path architecture and how it scales performance for a wide range of HPC applications. He also shows why recently published benchmarks have not reflected the real performance story.

On the November 2017 TOP500 list, Intel-powered supercomputers accounted for six of the top 10 systems and a record 471 of the 500 systems. Intel Omni-Path Architecture (Intel OPA) continued to gain momentum, delivering the majority of the petaFLOPS among systems using 100Gb fabrics, with over 80 petaFLOPS in total, an almost 20 percent increase over the June 2017 list. Intel OPA now connects almost 60 percent of the nodes using 100Gb fabrics on the TOP500 list. In addition, Intel powered all 137 new systems added to the November list.

As an element of Intel Scalable System Framework, Intel OPA delivers the performance for tomorrow’s high performance computing workloads and the ability to scale to tens of thousands of nodes—and eventually more—at a price competitive with today’s fabrics. The Intel OPA 100 Series product line is an end-to-end solution of PCIe adapters, silicon, switches, cables, and management software. As the successor to Intel True Scale Fabric, this optimized HPC fabric is built upon a combination of enhanced IP and Intel technology.

At SC17, Intel announced a new 48-port leaf module with a double-density connector for dense Intel Omni-Path 100 Series director switch solutions. With the 48-port module, the existing director switch chassis can support 50 percent more ports, up to 1,152 ports in a 20U chassis. This reduces the number of switches required, freeing up budget for more compute nodes and helping users run their HPC and deep learning jobs faster. The 48-port leaf module will be available in early 2018.

Download the MP3 * Subscribe on iTunes * Subscribe to RSS 

