“InfiniBand’s advantages of highest performance, scalability and robustness enable users to maximize their data center return on investment. InfiniBand was chosen by far more end-users compared to a proprietary offering, resulting in a more than 85 percent market share. We are happy to see our open Ethernet adapter and switch solutions enable all of the 40G and the first 100G Ethernet systems on the TOP500 list, resulting in overall 194 systems using Mellanox for their compute and storage connectivity.”
One of the primary conversations these days in the field of networking is whether it is better to onload network functions onto the CPU or better to offload these functions to the interconnect hardware. “Onloading interconnect technology is easier to build, but the issue becomes the CPU utilization; because the CPU must manage and execute network operations, it has less availability for applications, which is its primary purpose.”
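The CPU-utilization tradeoff described above can be made concrete with a toy model. This is a minimal illustrative sketch (all rates and cycle counts are hypothetical assumptions, not Mellanox benchmark figures): it estimates what fraction of a core's cycles remain for the application when protocol processing runs on the CPU versus on the adapter.

```python
# Toy model (illustrative assumptions, not vendor data): estimate the CPU
# cycles left for the application when network processing is onloaded onto
# the CPU versus offloaded to the interconnect adapter.

def app_cpu_fraction(msg_rate_per_sec, cycles_per_msg, cpu_hz, offloaded):
    """Fraction of one core's cycles available to the application."""
    if offloaded:
        network_cycles = 0.0  # adapter hardware handles protocol processing
    else:
        network_cycles = msg_rate_per_sec * cycles_per_msg
    return max(0.0, 1.0 - network_cycles / cpu_hz)

# Example: 200,000 messages/s, 5,000 cycles per message, one 3 GHz core.
onload = app_cpu_fraction(2e5, 5000, 3e9, offloaded=False)
offload = app_cpu_fraction(2e5, 5000, 3e9, offloaded=True)
print(f"onload:  {onload:.0%} of the core left for the application")
print(f"offload: {offload:.0%} of the core left for the application")
```

Under these assumed numbers, onloading leaves only about two thirds of the core for application work, while offloading leaves all of it, which is the heart of the argument quoted above.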
In this slidecast, Gilad Shainer from Mellanox announces the world’s first HDR 200Gb/s data center interconnect solutions. “These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017.”
Scot Schultz from Mellanox writes that the company is moving the industry forward to a world-class offload network architecture that will pave the way to Exascale. “Mellanox, alongside many industry thought-leaders, is a leader in advancing the Co-Design approach. The key value and core goal is to strive for more CPU offload capabilities and acceleration techniques while maintaining forward and backward compatibility of new and existing infrastructures; and the result is nothing less than the world’s most advanced interconnect, which continues to yield the most powerful and efficient supercomputers ever deployed.”
At insideHPC, we are very pleased to publish the Print ‘n Fly Guide to SC16 in Salt Lake City. We designed this guide to be an in-flight magazine custom-tailored for your journey to SC16 — the world’s largest gathering of high performance computing professionals. “Inside this guide you will find technical features on supercomputing, HPC interconnects, and the latest developments on the road to exascale. It also has great recommendations on food, entertainment, and transportation in SLC.”
“We are pleased to start shipping the ConnectX-5, the industry’s most advanced network adapter, to our key partners and customers, allowing them to leverage our smart network architecture to overcome performance limitations and to gain a competitive advantage,” said Eyal Waldman, Mellanox president and CEO. “ConnectX-5 enables our customers and partners to achieve higher performance, scalability and efficiency of their InfiniBand or Ethernet server and storage platforms. Our interconnect solutions, when combined with Intel, IBM, NVIDIA or ARM CPUs, allow users across the world to achieve significantly better return on investment from their IT infrastructure.”
The HPC Advisory Council has posted the agenda for its upcoming China Conference. The event takes place Oct. 26 in Xi’an, China. “We invite you to join us on Wednesday, October 26th, in Xi’an for our annual China Conference. This year’s agenda will focus on Deep Learning, Artificial Intelligence, HPC productivity, advanced topics and futures. Join fellow technologists, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High Performance Computing.”
In this video from the HPC Advisory Council Spain Conference, Dan Olds from OrionX discusses the High Performance Interconnect (HPI) market landscape, plus provides ratings and rankings of HPI choices today. “In this talk, we’ll take a look at the technologies and performance of high-end networking technology and the coming battle between onloading vs. offloading interconnect architectures.”
The big data analytics market has seen rapid growth in recent years. Part of this trend includes the increased use of machine learning (Deep Learning) technologies. Indeed, machine learning has been drastically accelerated through the use of GPU accelerators. The issues facing the HPC market are similar to those in the analytics market — efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.
Achieving better scalability and performance at Exascale will require full data reach. Without this capability, onload architectures force all data to move to the CPU before allowing any analysis. The ability to analyze data everywhere means that every active component in the cluster will contribute to the computing capabilities and boost performance. In effect, the interconnect will become its own “CPU” and provide in-network computing capabilities.
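The in-network computing idea above can be sketched with a small simulation. This is a hypothetical illustration of the general concept (a tree of switches summing partial results, in the spirit of offloaded collectives), not Mellanox's actual protocol; the function name, radix, and topology are assumptions for the example.

```python
# Illustrative sketch of in-network reduction: switches sum the partial
# results arriving from their children level by level, so no single CPU
# ever has to absorb all of the raw data. The radix-ary tree topology and
# all parameters here are hypothetical, chosen only to show the idea.

def in_network_sum(node_values, switch_radix=4):
    """Reduce values as a tree of switches might; also track the worst-case
    number of inbound messages any one device must absorb in a step."""
    level = list(node_values)
    max_inbound = 0
    while len(level) > 1:
        max_inbound = max(max_inbound, min(switch_radix, len(level)))
        level = [sum(level[i:i + switch_radix])
                 for i in range(0, len(level), switch_radix)]
    return level[0], max_inbound

values = list(range(1024))        # one partial result per compute node
total, inbound = in_network_sum(values)
assert total == sum(values)       # same answer as a CPU-side reduction
print(total, inbound)             # prints 523776 4
```

The contrast with an onload architecture falls out of the counter: with 1,024 nodes, a single reducing CPU would absorb 1,023 messages, while in the in-network sketch no device ever sees more than the switch radix per step — the interconnect itself is doing the computing.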