HDR InfiniBand Technology Reshapes the World of High-Performance and Machine Learning Platforms

“The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game-changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web 2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security.”

NIH Powers Biowulf Cluster with Mellanox EDR 100Gb/s InfiniBand

Today Mellanox announced that NIH, the U.S. National Institutes of Health’s Center for Information Technology, has selected Mellanox 100G EDR InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is the result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. “The Biowulf cluster is NIH’s core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research.”

New InfiniBand Architecture Specifications Extend Virtualization Support

“As performance demands continue to evolve in both HPC and enterprise cloud applications, the IBTA saw an increasing need for new enhancements to InfiniBand’s network capabilities, support features and overall interoperability,” said Bill Magro, co-chair of the IBTA Technical Working Group. “Our two new InfiniBand Architecture Specification updates satisfy these demands by delivering interoperability and testing upgrades for EDR and FDR, flexible management capabilities for optimal low-latency and low-power functionality and virtualization support for better network scalability.”

University of Tokyo Selects Mellanox EDR InfiniBand

Today Mellanox announced that the University of Tokyo has selected the company’s Switch-IB 2 EDR 100Gb/s InfiniBand Switches and ConnectX-4 adapters to accelerate its new supercomputer for computational science.

Mellanox Rolls Out EDR InfiniBand Routers

Today Mellanox announced a new line of InfiniBand router systems. The new EDR 100Gb/s InfiniBand routers enable a new level of scalability critical for the next generation of mega data-center deployments, as well as expanded capabilities for data center isolation between different users and applications. The network router delivers a consistent, high-performance, low-latency routing solution that is mission critical for high performance computing, cloud, Web 2.0, machine learning and enterprise applications.

IBTA Plugfest Expands EDR InfiniBand & RoCE Ecosystem

“IBTA’s world-class compliance and interoperability program ensures the dependability of the evolving InfiniBand specification, which in turn broadens industry adoption and user confidence,” said Rupert Dance, co-chair of the IBTA Compliance and Interoperability Working Group (CIWG). “With the continued support of our members and partners, the IBTA is able to offer the industry invaluable resources to help guide critical decision making during deployment of InfiniBand or RoCE solutions.”

Rich Graham Presents: The Exascale Architecture

Rich Graham presented this talk at the Stanford HPC Conference. “Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, Inc., as a provider of end-to-end communication services, is progressing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements which significantly improve InfiniBand’s scalability, performance, and ease of use.”

VSC in Belgium Moves to EDR InfiniBand for Medical Research

Today Mellanox announced that the Flemish Supercomputer Center (VSC) in Belgium has selected Mellanox’s end-to-end 100Gb/s EDR interconnect solutions to be integrated into a new LX-series supercomputer to be supplied by NEC Corporation. The system, which will be the fastest supercomputer (Tier-1) and the first complete end-to-end EDR 100Gb/s InfiniBand system in Belgium, is another example of the increasing global adoption of EDR InfiniBand technology.

Comparing FDR and EDR InfiniBand

Over at the Dell HPC Blog, Olumide Olusanya and Munira Hussain have posted an interesting comparison of FDR and EDR InfiniBand. “In the first post, we shared OSU Micro-Benchmarks (latency and bandwidth) and HPL performance between FDR and EDR InfiniBand. In this part, we will further compare performance using additional real-world applications such as ANSYS Fluent, WRF, and NAS Parallel Benchmarks. In both blogs, we have shown several micro-benchmark and real-world application results to compare FDR with EDR InfiniBand.”
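For context on what separates the two generations being benchmarked, the theoretical peak rates can be sketched with simple arithmetic. This is a back-of-envelope illustration using the standard IBTA per-lane signaling rates and 64b/66b line encoding for a 4x link; it is not taken from the Dell benchmark results.

```python
# Theoretical effective data rates for 4x FDR and EDR InfiniBand links.
# Per-lane signaling rates and 64b/66b encoding are the standard IBTA
# figures; this is illustrative arithmetic, not a measured benchmark.

LANES = 4
ENCODING = 64 / 66  # 64b/66b line encoding used by both FDR and EDR

signaling_gbps = {
    "FDR": 14.0625,   # Gb/s per lane, signaling rate
    "EDR": 25.78125,  # Gb/s per lane, signaling rate
}

for gen, rate in signaling_gbps.items():
    effective = rate * LANES * ENCODING
    print(f"{gen}: {rate} Gb/s/lane x {LANES} lanes -> {effective:.2f} Gb/s effective")
```

A 4x EDR link works out to 100 Gb/s of effective bandwidth versus roughly 54.5 Gb/s for 4x FDR, which is why latency- and bandwidth-sensitive applications like those in the Dell comparison can show meaningful gains from the upgrade.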

KTH in Sweden Moves to EDR 100Gb/s InfiniBand

Today Mellanox announced that its EDR 100Gb/s InfiniBand solutions have been selected by the KTH Royal Institute of Technology for use in its PDC Center for High Performance Computing. Mellanox’s robust and flexible EDR InfiniBand solution offers higher interconnect speed, lower latency and smart accelerations to maximize efficiency, and will enable the PDC Center to achieve world-leading data center performance across a variety of applications, including advanced modeling of climate change, brain function and protein-drug interactions.