Mellanox Rocks the TOP500 with HDR InfiniBand at SC19

In this video, Gilad Shainer from Mellanox describes the company’s dominating results on the TOP500 list of the world’s fastest supercomputers. “At SC19, Mellanox announced that 200 gigabit per second HDR InfiniBand accelerates 31% of the new 2019 InfiniBand systems on November’s TOP500 supercomputing list, demonstrating market demand for faster data speeds and smart interconnect technologies. Moreover, HDR InfiniBand connects the fastest TOP500 supercomputer built in 2019.”

Mellanox LongReach Appliance Extends InfiniBand Connectivity up to 40 Kilometers

Today Mellanox introduced the Mellanox Quantum LongReach series of long-distance InfiniBand switches. Mellanox Quantum LongReach systems seamlessly connect remote InfiniBand data centers or provide high-speed, full RDMA (remote direct memory access) connectivity between remote compute and storage infrastructures. Based on the 200 gigabit HDR Mellanox Quantum InfiniBand switch, the LongReach solution provides up to two long-reach InfiniBand ports and eight local InfiniBand ports. The long-reach ports can deliver up to 100Gb/s data throughput over distances of 10 and 40 kilometers.

Call for Sessions: OpenFabrics Alliance Workshop in March

The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 16th annual OFA Workshop. “The OFA Workshop 2020 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high-performance networking issues. Session proposals are being solicited in any area related to high performance networks and networking software, with a special emphasis on the topics for this year’s Workshop. In keeping with the Workshop’s emphasis on collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged.”

Building Oracle Cloud Infrastructure with Bare-Metal

In this video, Taylor Newill from Oracle describes how the Oracle Cloud Infrastructure delivers high performance for HPC applications. “From the beginning, Oracle built their bare-metal cloud with a simple goal in mind: deliver the same performance in the cloud that clients are seeing on-prem.”

Video: InfiniBand In-Network Computing Technology and Roadmap

Rich Graham from Mellanox gave this talk at the UK HPC Conference. “In-Network Computing transforms the data center interconnect into a “distributed CPU” and “distributed memory,” enabling it to overcome performance barriers and deliver faster, more scalable data analysis. HDR 200G InfiniBand In-Network Computing technology includes several elements – Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), smart Tag Matching and rendezvous protocol, and more. This session will discuss the InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap.”
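
As a rough illustration of what In-Network Computing offloads, the sketch below shows a standard MPI allreduce, the kind of collective that SHARP can execute inside the InfiniBand switch fabric rather than on the hosts. This is an illustrative example, not code from the talk; enabling SHARP is a property of the MPI library and fabric configuration and is not shown here.

```c
/* Minimal MPI allreduce sketch: the kind of collective that
 * In-Network Computing (SHARP) can offload to the switch fabric.
 * The offload is transparent to application code; turning it on
 * is an MPI-library/runtime concern, not shown here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* each rank contributes its rank id */
    double global = 0.0;

    /* Reduction across all ranks; with SHARP enabled, the sum can be
     * computed in the InfiniBand switches rather than on the hosts. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %.0f\n", size - 1, global);

    MPI_Finalize();
    return 0;
}
```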

Designing Scalable HPC, Deep Learning, Big Data, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the UK HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, Big Data and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (Xeon, ARM and OpenPower), high-performance networks, and GPGPUs (including GPUDirect RDMA).”
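
For readers unfamiliar with the MPI+X model the talk refers to, the following is a minimal sketch of one common flavor, MPI across nodes plus OpenMP threads within a node. It is assumption-laden context, not material from the talk, and does not address the runtime-design challenges (threading levels, accelerator support, and so on) the talk covers.

```c
/* Minimal MPI+OpenMP ("MPI+X") sketch: MPI ranks across nodes,
 * OpenMP threads within each rank. Illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request a threading level suitable for hybrid codes; here only
     * the main thread makes MPI calls, so FUNNELED is sufficient. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long local_count = 0;
    /* Node-local work spread over OpenMP threads. */
    #pragma omp parallel for reduction(+:local_count)
    for (long i = 0; i < 1000000; i++)
        local_count++;

    long total = 0;
    /* Combine per-rank results across the whole machine with MPI. */
    MPI_Reduce(&local_count, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total iterations across all ranks: %ld\n", total);

    MPI_Finalize();
    return 0;
}
```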

Harvard Names New Lenovo HPC Cluster after Astronomer Annie Jump Cannon

Harvard has deployed a liquid-cooled supercomputer from Lenovo at its FASRC computing center. The system, named “Cannon” in honor of astronomer Annie Jump Cannon, is a large-scale HPC cluster supporting scientific modeling and simulation for thousands of Harvard researchers. “This new cluster will have 30,000 cores of Intel 8268 “Cascade Lake” processors. Each node will have 48 cores and 192 GB of RAM.”

IBTA Celebrates 20 Years of Growth and Industry Success

“This year, the IBTA is celebrating 20 years of growth and success in delivering these widely used and valued technologies to the high-performance networking industry. Over the past two decades, the IBTA has provided the industry with technical specifications and educational resources that have advanced a wide range of high-performance platforms. InfiniBand and RoCE interconnects are deployed in the world’s fastest supercomputers and continue to significantly impact future-facing applications such as Machine Learning and AI.”

A Performance Comparison of Different MPI Implementations on an ARM HPC System

Nicholas Brown from EPCC gave this talk at the MVAPICH User Group. “In this talk I will describe work we have done in exploring the performance properties of MVAPICH, OpenMPI and MPT on one of these systems, Fulhame, which is an HPE Apollo 70-based system with 64 nodes of Cavium ThunderX2 ARM processors and Mellanox InfiniBand interconnect. In order to take advantage of these systems most effectively, it is very important to understand the performance that different MPI implementations can provide and any further opportunities to optimize these.”
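
For context, comparisons like this are normally driven by micro-benchmarks. The sketch below is a minimal two-rank ping-pong latency test in plain MPI C; it is an illustrative stand-in, not the EPCC benchmarks or the OSU Micro-Benchmarks used in practice, and the message size and iteration count are arbitrary.

```c
/* Rough two-rank ping-pong latency sketch, in the spirit of the
 * micro-benchmarks commonly used to compare MPI implementations
 * (e.g. MVAPICH, Open MPI, MPT). Run with exactly two ranks. */
#include <mpi.h>
#include <stdio.h>

#define ITERS    1000
#define MSG_SIZE 8          /* bytes per message */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * ITERS));

    MPI_Finalize();
    return 0;
}
```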

Overview of the MVAPICH Project and Future Roadmap

DK Panda gave this talk at the MVAPICH User Group. “This talk will provide an overview of the MVAPICH project (past, present, and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X and MVAPICH2-GDR) for HPC and Deep Learning will be presented. Features and releases for Microsoft Azure and Amazon AWS will also be presented. Current status and future plans for OSU INAM, OMB, and Best Practices Page will also be presented.”