NVMe Over Fabrics: High Performance SSDs Networked for Composable Infrastructure

Rob Davis from Mellanox gave this talk at the 2018 OCP Summit. “There is a new, very high performance open source SSD interface called NVMe over Fabrics now available to expand the capabilities of networked storage solutions. It is an extension of the local NVMe SSD interface developed a few years ago, driven by the need for a faster interface for SSDs. Similar to the way the native disk drive SCSI protocol was networked with Fibre Channel 20 years ago, this technology enables NVMe SSDs to be networked and shared with their native protocol. By utilizing ultra-low latency RDMA technology to achieve data sharing across a network without sacrificing the local performance characteristics of NVMe SSDs, true composable infrastructure is now possible.”
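
To make the idea concrete, here is a minimal sketch of how a remote NVMe-oF namespace is typically attached over RDMA using the standard nvme-cli tool, driven from Python. This is not from the talk; the target address, port, and subsystem NQN are hypothetical placeholders you would replace with your own fabric details.

# Sketch: attach a remote NVMe-oF namespace over RDMA with nvme-cli.
# The address, service ID, and NQN below are hypothetical placeholders.
import subprocess

TARGET_ADDR = "192.168.0.10"                 # hypothetical RDMA-capable target
TARGET_PORT = "4420"                         # conventional NVMe-oF service ID
SUBSYS_NQN = "nqn.2018-03.example:ssd-pool"  # hypothetical subsystem NQN

def run(cmd):
    # Run a command, fail loudly on error, and return its stdout as text.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Discover the subsystems the target exports over the RDMA transport.
print(run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect to one subsystem; the networked SSD then appears as a local
# /dev/nvmeXnY block device, keeping the native NVMe protocol end to end.
run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT])

# Confirm the remote namespace is now visible alongside local NVMe drives.
print(run(["nvme", "list"]))

Because the data path rides on RDMA, I/O to that device avoids extra copies and CPU involvement on the target side, which is how the local performance characteristics described above are preserved across the network.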

Preliminary Agenda Posted for HPC Advisory Council Swiss Conference

The HPC Advisory Council has posted the preliminary agenda for its Swiss Conference. Held in conjunction with HPCXXL, the event takes place April 9-12 in Lugano, Switzerland. “Delve into a wide range of interests, disciplines and topics in HPC – from present day application to its future potential. Join the Centro Svizzero di Calcolo Scientifico (CSCS), HPC Advisory Council members and colleagues from around the world for invited and contributed talks and immersive tutorials at the ninth annual Swiss Conference! Knowledgeable evaluations, prescriptive best practices and provocative insights: the open forum conference brings together industry experts for three days of highly interactive sessions.”

Sharing High-Performance Interconnects Across Multiple Virtual Machines

Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. “Virtualized devices offer maximum flexibility. This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations.”
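
As a rough illustration of the mechanism behind SR-IOV (shown here with generic Linux sysfs rather than the vSphere-specific workflow covered in the talk), the Python sketch below checks how many virtual functions a NIC can expose and enables a handful; the interface name is a placeholder assumption.

# Sketch: inspect and enable SR-IOV virtual functions via Linux sysfs.
# Not vSphere-specific; "eth0" is a hypothetical SR-IOV-capable interface.
from pathlib import Path

IFACE = "eth0"
dev = Path(f"/sys/class/net/{IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())
active_vfs = int((dev / "sriov_numvfs").read_text())
print(f"{IFACE}: {active_vfs} of {total_vfs} virtual functions enabled")

# Enabling VFs requires root. Each VF is a lightweight PCIe function that can
# be passed through to a virtual machine, so guest traffic bypasses the
# hypervisor's software switch on the data path.
if active_vfs == 0 and total_vfs > 0:
    (dev / "sriov_numvfs").write_text(str(min(4, total_vfs)))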

HDR 200G InfiniBand: Empowering Next Generation Data Centers

The need for faster data movement has never been more critical to the worlds of HPC and machine learning. In light of this demand, companies like Mellanox Technologies are working to introduce solutions that let HPC and deep learning platforms move and analyze data in real time and at faster speeds than ever. Download the new white paper from Mellanox that explores the company’s end-to-end HDR 200G InfiniBand product portfolio and the benefits of in-network computing.

Highest Performance and Scalability for HPC and AI

Scot Schultz from Mellanox gave this talk at the Stanford HPC Conference. “Today, many agree that the next wave of disruptive technology blurring the lines between the digital, physical and even the biological, will be the fourth industrial revolution of AI. The fusion of state-of-the-art computational capabilities, extensive automation and extreme connectivity is already affecting nearly every aspect of society, driving global economics and extending into every aspect of our daily life.”

Lenovo ThinkSystem Servers Power 1.3 Petaflop Supercomputer at University of Southampton

OCF in the UK has deployed a new supercomputer at the University of Southampton. Named Iridis 5, the 1.3 Petaflop system will support research demanding traditional HPC as well as projects requiring large scale deep storage, big data analytics, web platforms for bioinformatics, and AI services. “We’ve had early access to Iridis 5 and it’s substantially bigger and faster than its previous iteration – it’s well ahead of any other in use at any University across the UK for the types of calculations we’re doing.”

Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS-OpenSHMEM/UPC/CAF/UPC++, OpenMP and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA) and energy awareness.”
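
As a small, concrete example of the MPI side of MPI+X, the sketch below uses mpi4py and NumPy (assumed here for illustration; neither is named in the talk) to perform an Allreduce, the kind of collective such runtimes must keep efficient as systems scale toward millions of processes.

# Sketch: a simple MPI program using mpi4py; each rank computes a local
# partial result and Allreduce combines it across the whole job.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Local work: every rank fills a small vector with its own values.
local = np.full(4, rank, dtype='d')
total = np.empty(4, dtype='d')

# Collective communication: sum the vectors element-wise across all ranks.
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, reduced vector: {total}")

Launched with something like mpirun -np 4 python allreduce.py, every rank ends up holding the element-wise sum; it is collectives like this that the middleware must keep fast as node counts, accelerators, and network speeds grow.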

Introducing 200G HDR InfiniBand Solutions

As the first to 40Gb/s, 56Gb/s and 100Gb/s bandwidth, Mellanox has both boosted data center and cloud performance and improved return on investment at a pace that exceeds its own roadmap. To that end, Mellanox has now announced that it is the first company to enable 200Gb/s data speeds, with Mellanox Quantum switches, ConnectX-6 adapters, and LinkX cables combining for an end-to-end 200G HDR InfiniBand solution in 2018. Download the new report, courtesy of Mellanox Technologies, to learn more about 200G HDR InfiniBand solutions.

OpenFabrics Alliance Workshop 2018 – An Emphasis on Fabric Community Collaboration

In this special guest feature, Parks Fields and Paul Grun from the OpenFabrics Alliance write that the upcoming OFA Workshop in Boulder is an excellent opportunity to collaborate on the next generation of network fabrics. “Come join the community in Boulder this year to lend your voice to shaping the direction of fabric technology in big ways or small, or perhaps just to listen and learn about the latest trends coming down the pike, or to pick up tips and tricks to make you more effective in your daily job.”

2018 Technology Trends from Mellanox CTO Michael Kagan

In this video, Mellanox CTO Michael Kagan offers his view of technology trends for 2018. “Mellanox is looking forward to continued Technology and Product Leadership in 2018. As the leader in End-to-End InfiniBand and Ethernet Technologies, Mellanox will introduce new products (Switch Systems/Silicon, Acq. EZchip Technology) to accelerate future growth. The company is also positioned to benefit from the market transition from 10Gb to 25/50/100Gb.”