GIGABYTE Steps up with a Broad Array of Server Offerings for AI & HPC

In this video from SC19, Peter Hanley from GIGABYTE describes how the company delivers a full range of server solutions for HPC, AI, and the Edge. “GIGABYTE is an industry leader in HPC, delivering systems with the highest GPU density combined with excellent cooling performance, power efficiency and superior networking flexibility. These systems can provide massive parallel computing capabilities to power your next AI breakthrough.”

Call for Sessions: OpenFabrics Alliance Workshop in March

The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 16th annual OFA Workshop. “The OFA Workshop 2020 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high-performance networking issues. Session proposals are being solicited in any area related to high performance networks and networking software, with a special emphasis on the topics for this year’s Workshop. In keeping with the Workshop’s emphasis on collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged.”

Designing Scalable HPC, Deep Learning, Big Data, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the UK HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, Big Data and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (Xeon, ARM and OpenPower), high-performance networks, and GPGPUs (including GPUDirect RDMA).”

Mellanox Accelerates NVMe/TCP and RoCE Fabrics to 200Gb/s

Today Mellanox announced acceleration of NVMe/TCP at speeds up to 200Gb/s. The entire portfolio of shipping ConnectX adapters supports NVMe-oF over both TCP and RoCE, and the newly-introduced ConnectX-6 Dx and BlueField-2 products also secure NVMe-oF connections over IPsec and TLS using hardware-accelerated encryption and decryption. These Mellanox solutions empower cloud, telco and enterprise data […]

Report: Mellanox ConnectX Ethernet NICs Outperforming Competition

Today Mellanox announced that laboratory tests by The Tolly Group show its ConnectX 25GbE Ethernet adapter significantly outperforms the Broadcom NetXtreme E series adapter in performance, scalability, and efficiency. “Our testing shows that with RoCE, storage traffic, and DPDK, the Mellanox NIC outperformed the Broadcom NIC in throughput and efficient CPU utilization. ConnectX-5 also used ‘Zero-Touch RoCE’ to deliver high throughput even with partial or no congestion control, two scenarios where Broadcom declined to be tested.”

IBTA Celebrates 20 Years of Growth and Industry Success

“This year, the IBTA is celebrating 20 years of growth and success in delivering these widely used and valued technologies to the high-performance networking industry. Over the past two decades, the IBTA has provided the industry with technical specifications and educational resources that have advanced a wide range of high-performance platforms. InfiniBand and RoCE interconnects are deployed in the world’s fastest supercomputers and continue to significantly impact future-facing applications such as Machine Learning and AI.”

Video: Mellanox Rolls Out SmartNICs

In this video, Mellanox CTO Michael Kagan talks about the next step for SmartNICs and the company’s newly released ConnectX-6 Dx product driven by its own silicon. “The BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gb/s.”

Video: Lustre, RoCE, and MAN

Marek Magryś from Cyfronet gave this talk at the DDN User Group. “This talk will describe the architecture and implementation of a high-capacity Lustre file system for the needs of a data-intensive project. Storage is based on a DDN ES7700 building block and uses RDMA over Converged Ethernet as the network transport. What is unusual is that the storage system is located over 10 kilometers away from the supercomputer. Challenges, performance benchmarks, and tuning will be the main topics of the presentation.”

HPC Breaks Through to the Cloud: Why It Matters

In this special guest feature, Scot Schultz from Mellanox writes that researchers are benefiting in a big way from HPC in the Cloud. “HPC has many different advantages depending on the specific use case, but one aspect that these implementations have in common is their use of RDMA-based fabrics to improve compute performance and reduce latency.”

The State of High-Performance Fabrics: A Chat with the OpenFabrics Alliance

In this special guest feature, Paul Grun and Doug Ledford from the OpenFabrics Alliance describe industry trends in the fabrics space, its current state of affairs, and emerging applications. “Originally, ‘high-performance fabrics’ were associated with large, exotic HPC machines. But in the modern world, these fabrics, which are based on technologies designed to improve application efficiency, performance, and scalability, are becoming more and more common in the commercial sphere because of the increasing demands being placed on commercial systems.”