Mellanox Accelerates NVMe/TCP and RoCE Fabrics to 200Gb/s

Today Mellanox announced acceleration of NVMe/TCP at speeds up to 200Gb/s. The entire portfolio of shipping ConnectX adapters supports NVMe-oF over both TCP and RoCE, and the newly introduced ConnectX-6 Dx and BlueField-2 products also secure NVMe-oF connections over IPsec and TLS using hardware-accelerated encryption and decryption. These Mellanox solutions empower cloud, telco and enterprise data […]

Report: Mellanox ConnectX Ethernet NICs Outperforming Competition

Today Mellanox announced that laboratory tests by The Tolly Group show its ConnectX 25GbE Ethernet adapter significantly outperforms the Broadcom NetXtreme E-Series adapter in terms of performance, scalability and efficiency. “Our testing shows that with RoCE, storage traffic, and DPDK, the Mellanox NIC outperformed the Broadcom NIC in throughput and efficient CPU utilization. ConnectX-5 also used ‘Zero-Touch RoCE’ to deliver high throughput even with partial and no congestion control, two scenarios where Broadcom declined to be tested.”

IBTA Celebrates 20 Years of Growth and Industry Success

“This year, the IBTA is celebrating 20 years of growth and success in delivering these widely used and valued technologies to the high-performance networking industry. Over the past two decades, the IBTA has provided the industry with technical specifications and educational resources that have advanced a wide range of high-performance platforms. InfiniBand and RoCE interconnects are deployed in the world’s fastest supercomputers and continue to significantly impact future-facing applications such as Machine Learning and AI.”

Video: Mellanox Rolls Out SmartNICs

In this video, Mellanox CTO Michael Kagan talks about the next step for SmartNICs and the company’s newly released ConnectX-6 Dx product driven by its own silicon. “The BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gb/s.”

Video: Lustre, RoCE, and MAN

Marek Magryś from Cyfronet gave this talk at the DDN User Group. “This talk will describe the architecture and implementation of a high-capacity Lustre file system for the needs of a data-intensive project. Storage is based on the DDN ES7700 building block and uses RDMA over Converged Ethernet as the network transport. What is unusual is that the storage system is located over 10 kilometers away from the supercomputer. Challenges, performance benchmarks and tuning will be the main topics of the presentation.”

HPC Breaks Through to the Cloud: Why It Matters

In this special guest feature, Scot Schultz from Mellanox writes that researchers are benefiting in a big way from HPC in the cloud. “HPC has many different advantages depending on the specific use case, but one aspect that these implementations have in common is their use of RDMA-based fabrics to improve compute performance and reduce latency.”

The State of High-Performance Fabrics: A Chat with the OpenFabrics Alliance

In this special guest feature, Paul Grun and Doug Ledford from the OpenFabrics Alliance describe the industry trends in the fabrics space, its state of affairs and emerging applications. “Originally, ‘high-performance fabrics’ were associated with large, exotic HPC machines. But in the modern world, these fabrics, which are based on technologies designed to improve application efficiency, performance, and scalability, are becoming more and more common in the commercial sphere because of the increasing demands being placed on commercial systems.”

Designing HPC, Big Data, & Deep Learning Middleware for Exascale

DK Panda from Ohio State University presented this talk at the HPC Advisory Council Spain Conference. “This talk will focus on challenges in designing HPC, Big Data, and Deep Learning middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models. Features and sample performance numbers from MVAPICH2 libraries will be presented.”
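
For readers unfamiliar with the term, “MPI+X” pairs MPI for communication across nodes with a node-level model such as OpenMP for threading within a node. As a rough illustration only (not taken from the talk), a minimal hybrid hello-world in C might look like the sketch below, built with an MPI compiler wrapper such as MVAPICH2’s mpicc; the file and flag names here are assumptions for the example.

```c
/* Minimal MPI+OpenMP "hello" sketch (illustrative only, not from the talk).
 * Build (assuming an MPI wrapper such as MVAPICH2's mpicc):
 *   mpicc -fopenmp hello_hybrid.c -o hello_hybrid
 * Run, e.g.: mpirun -np 2 ./hello_hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request an MPI library that tolerates threaded callers. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* The "X" part: OpenMP threads within each MPI rank. */
    #pragma omp parallel
    {
        printf("rank %d/%d, thread %d/%d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```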

HPC Network Stack on ARM

Pavel Shamis from ARM gave this talk at the MVAPICH User Group. “With the emerging availability of HPC solutions based on the ARM CPU architecture, it is important to understand how ARM integrates with the RDMA hardware and the HPC network software stack. In this talk, we will overview the ARM architecture and system software stack. We will discuss how the ARM CPU interacts with network devices and accelerators. In addition, we will share our experience in enabling the RDMA software stack and one-sided communication libraries (Open UCX, OpenSHMEM/SHMEM) on ARM and share preliminary evaluation results.”
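
One-sided communication of the kind OpenSHMEM provides lets one processing element (PE) write directly into a peer’s symmetric memory over RDMA, without the target posting a matching receive. A minimal sketch in C is shown below (illustrative only, not from the talk); the oshcc/oshrun wrapper names are assumptions and vary by OpenSHMEM implementation.

```c
/* Minimal OpenSHMEM one-sided put sketch (illustrative only, not from the talk).
 * Build with an OpenSHMEM wrapper, e.g.: oshcc put_demo.c -o put_demo
 * Run, e.g.: oshrun -np 2 ./put_demo
 */
#include <shmem.h>
#include <stdio.h>

/* Symmetric variable: exists at the same address on every PE. */
static int dest = -1;

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* One-sided: PE 0 writes directly into PE 1's memory; PE 1 posts nothing. */
    if (me == 0 && npes > 1)
        shmem_int_p(&dest, 42, 1);

    shmem_barrier_all();   /* ensure the put is complete and visible */

    if (me == 1)
        printf("PE %d received %d via one-sided put\n", me, dest);

    shmem_finalize();
    return 0;
}
```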

OSC Hosts Fifth MVAPICH Users Group

A broad array of system administrators, developers, researchers and students who share an interest in the MVAPICH open-source library for high performance computing will gather this week for the fifth meeting of the MVAPICH Users Group (MUG). “Dr. Panda’s library is a cornerstone for HPC machines around the world, including OSC’s systems and many of the Top 500,” said Dave Hudak, Ph.D., interim executive director of OSC. “We’ve gained a lot of insight and expertise from partnering with DK and his research group throughout the years.”