Call for Sessions: OpenFabrics Alliance Workshop 2017 in Austin

The OpenFabrics Alliance has issued its Call for Sessions for the 2017 OFA Workshop, which will take place March 27-31, 2017 in Austin, Texas. “An ongoing collaboration between OpenFabrics Software (OFS) producers and users is necessary to address difficult network challenges. The 13th Annual OpenFabrics Alliance (OFA) Workshop is a key industry event encouraging a dialogue that is geared toward strengthening high-performance networks end-to-end and represents a joint effort among open source networking community members.”

NIH Powers Biowulf Cluster with Mellanox EDR 100Gb/s InfiniBand

Today Mellanox announced that the Center for Information Technology at the U.S. National Institutes of Health (NIH) has selected Mellanox EDR 100Gb/s InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is the result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. “The Biowulf cluster is NIH’s core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research.”

New InfiniBand Architecture Specifications Extend Virtualization Support

“As performance demands continue to evolve in both HPC and enterprise cloud applications, the IBTA saw an increasing need for new enhancements to InfiniBand’s network capabilities, support features and overall interoperability,” said Bill Magro, co-chair of the IBTA Technical Working Group. “Our two new InfiniBand Architecture Specification updates satisfy these demands by delivering interoperability and testing upgrades for EDR and FDR, flexible management capabilities for optimal low-latency and low-power functionality, and virtualization support for better network scalability.”

Why Intel Omni-Path is Growing Fast on the TOP500

In this video from SC16, Joe Yaworsky describes how Intel Omni-Path is gaining traction on the TOP500. As the interconnect for the Intel Scalable System Framework, Omni-Path is focused on delivering the best possible application performance. “In the nine months since Intel Omni-Path Architecture (Intel OPA) began shipping, it has become the standard fabric for 100 gigabit systems. Intel OPA is featured in 28 of the TOP500 most powerful supercomputers in the world announced at Supercomputing 2016 and now has 66 percent of the 100Gb market. TOP500 designs include Oakforest-PACS, MIT Lincoln Lab and CINECA.”
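
As a quick sanity check on those figures (the arithmetic below is ours, using only the numbers quoted above):

    # Back-of-envelope check on the quoted Intel OPA figures (Nov 2016 TOP500).
    opa_systems = 28        # TOP500 systems running Intel Omni-Path
    opa_100g_share = 0.66   # quoted share of the 100Gb fabric market

    total_100g = opa_systems / opa_100g_share
    print(f"Implied 100Gb systems on the list: ~{total_100g:.0f}")   # ~42
    print(f"OPA share of the full TOP500: {opa_systems / 500:.1%}")  # 5.6%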

What’s Next for HPC? A Q&A with Michael Kagan, CTO of Mellanox

As an HPC technology vendor, Mellanox is in the business of providing the leading-edge interconnects that drive many of the world’s fastest supercomputers. To learn more about what’s new for SC16, we caught up with Michael Kagan, CTO of Mellanox. “Moving InfiniBand beyond EDR to HDR is critical not only for HPC, but also for the numerous industries that are adopting AI and Big Data to make real business sense out of the amount of data available, and that we continue to collect on a daily basis.”

Mellanox Brings HDR to SC16 while Dominating Today’s TOP500

“InfiniBand’s advantages of highest performance, scalability and robustness enable users to maximize their data center return on investment. InfiniBand was chosen by far more end-users compared to a proprietary offering, resulting in a more than 85 percent market share. We are happy to see our open Ethernet adapter and switch solutions enable all of the 40G and the first 100G Ethernet systems on the TOP500 list, resulting in overall 194 systems using Mellanox for their compute and storage connectivity.”
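
Taking the quoted numbers at face value, the list-wide share works out as follows (our arithmetic, not Mellanox’s; note that the 85 percent figure refers to InfiniBand’s share versus the proprietary alternative, not a share of the whole list):

    # Arithmetic on the quoted Mellanox TOP500 figures (Nov 2016 list).
    mellanox_systems = 194  # systems using Mellanox compute/storage connectivity
    list_size = 500
    print(f"Share of the TOP500: {mellanox_systems / list_size:.1%}")  # 38.8%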

Offloading vs. Onloading: The Case of CPU Utilization

One of the primary conversations these days in the field of networking is whether it is better to onload network functions onto the CPU or better to offload these functions to the interconnect hardware. “Onloading interconnect technology is easier to build, but the issue becomes the CPU utilization; because the CPU must manage and execute network operations, it has less availability for applications, which is its primary purpose.”
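
To make that trade-off concrete, here is a minimal illustrative model; the overhead fractions are hypothetical, chosen only to show the shape of the argument:

    # Toy model of the onload-vs-offload trade-off described above. With
    # onloading, a fraction of CPU cycles goes to driving the network stack;
    # with offloading, the NIC handles transport and that fraction stays small.

    def cores_left_for_app(total_cores: int, network_overhead: float) -> float:
        """CPU capacity (in core-equivalents) left for the application."""
        return total_cores * (1.0 - network_overhead)

    cores = 28  # hypothetical dual-socket node
    print(f"Onload  (30% of cycles on networking): {cores_left_for_app(cores, 0.30):.1f} cores")
    print(f"Offload ( 2% of cycles on networking): {cores_left_for_app(cores, 0.02):.1f} cores")

Under these assumptions, offloading returns roughly eight core-equivalents per node to the application, which is precisely the CPU-availability argument made above.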

Slidecast: Mellanox Announces 200Gb/s HDR InfiniBand Solutions

In this slidecast, Gilad Shainer from Mellanox announces the world’s first HDR 200Gb/s data center interconnect solutions. “These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017.”
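
For context, each recent InfiniBand generation has roughly doubled the data rate of a standard 4-lane (4x) port, with HDR reaching 200Gb/s via 50Gb/s lanes; the figures below are the commonly quoted per-port rates:

    # Commonly quoted data rates for a 4x InfiniBand port, by generation.
    rates_gbps = {"QDR": 40, "FDR": 56, "EDR": 100, "HDR": 200}
    for gen, rate in rates_gbps.items():
        print(f"{gen}: {rate} Gb/s per 4x port (~{rate / 4:.0f} Gb/s per lane)")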

InfiniBand: When State-of-the-Art becomes State-of-the-Smart

Scot Schultz from Mellanox writes that the company is moving the industry toward a world-class offload network architecture that will pave the way to Exascale. “Mellanox, alongside many industry thought-leaders, is a leader in advancing the Co-Design approach. The key value and core goal is to strive for more CPU offload capabilities and acceleration techniques while maintaining forward and backward compatibility of new and existing infrastructures; and the result is nothing less than the world’s most advanced interconnect, which continues to yield the most powerful and efficient supercomputers ever deployed.”

It’s Here: The Print ‘n Fly Guide to SC16 in Salt Lake City

At insideHPC, we are very pleased to publish the Print ‘n Fly Guide to SC16 in Salt Lake City. We designed this guide to be an in-flight magazine custom tailored for your journey to SC16, the world’s largest gathering of high performance computing professionals. “Inside this guide you will find technical features on supercomputing, HPC interconnects, and the latest developments on the road to exascale. It also has great recommendations on food, entertainment, and transportation in SLC.”