High Performance Interconnects – Assessments, Rankings and Landscape

Dan Olds from OrionX.net presented this talk at the Switzerland HPC Conference. “Dan Olds will present recent research into the history of High Performance Interconnects (HPI), the current state of the HPI market, where HPIs are going in the future, and how customers should evaluate HPI options today. This will be a highly informative and interactive session.”

Panel Discussion: The Exascale Era

In this video from Switzerland HPC Conference, Rich Brueckner from insideHPC moderates a panel discussion on Exascale Computing. “The Exascale Computing Project in the USA is tasked with developing a set of advanced supercomputers with 50x better performance than today’s fastest machines on real applications. This panel discussion will look at the challenges, gaps, and probable pathways forward in this monumental endeavor.”

Panelists:

Gilad Shainer, HPC Advisory Council
Jeffrey Stuecheli, IBM
DK Panda, Ohio State University
Torsten Hoefler, ETH Zurich
Rich Graham, Mellanox

Video: InfiniBand Virtualization

“InfiniBand Virtualization allows a single Channel Adapter to present multiple transport endpoints that share the same physical port. To software, these endpoints are exposed as independent Virtual HCAs (VHCAs), and thus may be assigned to different software entities, such as VMs. VHCAs are visible to Subnet Management and are managed just like physical HCAs. We will cover the Virtualization model, management, and addressing modes, and discuss deployment considerations.”
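
To make the transparency concrete, here is a minimal sketch using the libibverbs API that enumerates the HCAs visible on a host; since each VHCA is exposed as an independent device, one assigned to a VM would appear in this same list just like a physical adapter. The build line is an assumption and packaging varies by distribution.

```c
/* Minimal libibverbs sketch: list the HCAs visible to this host.
 * A VHCA assigned to this software entity shows up here the same
 * way a physical HCA does.
 * Build (assumption): gcc list_hcas.c -libverbs -o list_hcas
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%-16s GUID 0x%016llx  ports: %u\n",
                   ibv_get_device_name(devs[i]),
                   /* GUID is returned in network byte order */
                   (unsigned long long)ibv_get_device_guid(devs[i]),
                   attr.phys_port_cnt);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```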

Experiences with NVMe over Fabrics

“Using RDMA, NVMe over Fabrics (NVMe-oF) provides the high bandwidth and low-latency characteristics of NVMe to remote devices. Moreover, these performance traits are delivered with negligible CPU overhead, as the bulk of the data transfer is conducted by RDMA. In this session, we present an overview of NVMe-oF and its implementation in Linux. We point out the main design choices and evaluate NVMe-oF performance for both InfiniBand and RoCE fabrics.”
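
As a rough illustration of the Linux host side mentioned above: the kernel exposes a /dev/nvme-fabrics control file, and a connect request is an option string written to it, which is what nvme-cli's `nvme connect` does underneath. The sketch below assumes the RDMA transport; the address, port, and NQN are made-up placeholders.

```c
/* Hedged sketch: how a Linux host initiates an NVMe-oF connection.
 * Writing a comma-separated option string to /dev/nvme-fabrics asks
 * the nvme-rdma host driver to connect to a remote subsystem. The
 * traddr/trsvcid/nqn values below are placeholders, not real targets.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=rdma,traddr=192.168.0.10,trsvcid=4420,"
        "nqn=nqn.2016-06.example:subsystem1";

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }

    if (write(fd, opts, strlen(opts)) < 0) {
        perror("connect request rejected");
        close(fd);
        return 1;
    }

    /* On success the kernel instantiates a new controller; reading
     * the fd back reports it (e.g. "instance=0,cntlid=1"). */
    char buf[128] = {0};
    if (read(fd, buf, sizeof(buf) - 1) > 0)
        printf("connected: %s\n", buf);

    close(fd);
    return 0;
}
```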

Accelerating Apache Spark with RDMA

Yuval Degani from Mellanox presented this talk at the OpenFabrics Workshop. “In this talk, we present a Java-based RDMA network layer for Apache Spark. The implementation optimizes both the RPC and the Shuffle mechanisms for RDMA. Initial benchmarking shows up to a 25% improvement for Spark applications.”

Dell Powers New Owens Cluster at Ohio State

Today the Ohio Supercomputer Center dedicated its newest, most powerful supercomputer: the Owens Cluster. The Dell cluster, named for the iconic Olympic champion Jesse Owens, delivers 1.5 petaflops of total peak performance. “OSC’s Owens Cluster represents one of the most significant HPC systems Dell has built,” said Tony Parkinson, Vice President for NA Enterprise Solutions and Alliances at Dell.

OFA Workshop in Austin to Put Spotlight on InfiniBand and RoCE

“The OpenFabrics Alliance (OFA) workshop is an annual event devoted to advancing the state of the art in networking. The workshop is known for showcasing a broad range of topics all related to network technology and deployment through an interactive, community-driven event. The comprehensive event includes a rich program made up of more than 50 sessions covering a variety of critical networking topics, which range from current deployments of RDMA to new and advanced network technologies.”

Mellanox ConnectX-5 Sets DPDK Performance Record with 100Gb/s Ethernet

“The I/O-intensive nature of Virtualized Network Functions (VNFs), including virtual Firewall, virtual Evolved Packet Core (vEPC), virtual Session Border Controller (vSBC), Anti-DDoS, and Deep Packet Inspection (DPI) applications, has posed significant challenges to building cost-effective NFV infrastructures that meet packet-rate, latency, jitter, and security requirements. Leveraging its wealth of experience in building high-performance server/storage I/O components and switching systems for High Performance Computing, hyperscale data centers, and telecommunications operators, Mellanox has the industry’s broadest range of intelligent Ethernet NIC and switch solutions, spanning interface speeds of 10, 25, 40, 50, and 100Gb/s.”
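
The packet rates behind records like this come from DPDK's poll-mode design: dedicated userspace cores spin on burst receive/transmit calls with no interrupts and no per-packet kernel crossings. Below is a condensed, hedged sketch of that pattern, modeled on DPDK's skeleton forwarding example; it assumes two ports already bound to DPDK-capable drivers, and the port numbers are placeholders.

```c
/* Hedged sketch of a DPDK poll-mode forwarder: receive bursts on
 * port 0 and transmit them on port 1. Condensed from the pattern
 * in DPDK's skeleton example; error handling is minimal. */
#include <string.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_DESC    1024
#define NUM_MBUFS  8191
#define MBUF_CACHE 250
#define BURST_SIZE 32

/* Configure one RX and one TX queue on a port, then start it. */
static int port_init(uint16_t port, struct rte_mempool *pool)
{
    struct rte_eth_conf conf;
    memset(&conf, 0, sizeof(conf));

    if (rte_eth_dev_configure(port, 1, 1, &conf) != 0)
        return -1;
    if (rte_eth_rx_queue_setup(port, 0, NB_DESC,
            rte_eth_dev_socket_id(port), NULL, pool) != 0)
        return -1;
    if (rte_eth_tx_queue_setup(port, 0, NB_DESC,
            rte_eth_dev_socket_id(port), NULL) != 0)
        return -1;
    return rte_eth_dev_start(port);
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;

    /* Packet-buffer pool shared by both ports. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
        NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
    if (!pool || port_init(0, pool) != 0 || port_init(1, pool) != 0)
        return EXIT_FAILURE;

    /* The hot loop: spin, burst-receive, burst-transmit. No
     * interrupts, no system calls on the packet path. */
    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        const uint16_t nb_rx = rte_eth_rx_burst(0, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;
        const uint16_t nb_tx = rte_eth_tx_burst(1, 0, bufs, nb_rx);
        /* Free whatever the TX ring could not absorb. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```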

Tutorial on In-Network Computing: SHARP Technology for MPI Offloads

“Increased system size and a greater reliance on system parallelism to meet computational needs require innovative system architectures to address today’s simulation challenges. As a step toward a new class of network co-processors, intelligent network devices that manipulate data traversing the data-center network, SHARP technology is designed to offload collective operation processing to the network. This tutorial will provide an overview of SHARP technology, its integration with MPI, the SHARP software components, and a live example of running MPI collectives.”
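
To make the offload concrete: SHARP targets standard MPI collectives such as MPI_Allreduce, so application code does not change; when the MPI stack enables SHARP, the reduction executes in the switch fabric rather than on host CPUs. Below is a minimal sketch of such a collective. Enabling the offload through environment variables (e.g. HCOLL_ENABLE_SHARP in Mellanox's HPC-X stack) is an assumption; consult your MPI's documentation.

```c
/* Minimal MPI collective of the kind SHARP offloads. With a
 * SHARP-capable fabric and a SHARP-enabled MPI (assumption:
 * toggled via environment variables such as HCOLL_ENABLE_SHARP),
 * this MPI_Allreduce is reduced inside the network; the source
 * code is identical either way.
 * Build/run: mpicc allreduce.c -o allreduce && mpirun -n 4 ./allreduce
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;  /* each rank contributes its id */
    double sum = 0.0;

    /* The collective itself -- the offload is transparent here. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", sum);

    MPI_Finalize();
    return 0;
}
```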

Mellanox Demos 4X Improvement in Crypto Performance with 40G Ethernet Network Adapter

Today Mellanox announced line-rate crypto throughput using the company’s Innova IPsec Network Adapter, demonstrating more than three times higher throughput and more than four times better CPU utilization compared to x86 software-based server offerings. Mellanox’s Innova IPsec adapter provides seamless crypto capabilities and advanced network accelerations to modern data centers, enabling the ubiquitous use of encryption across the network while sustaining performance, scalability, and efficiency. By replacing software-based offerings, Innova can reduce data center expenses by 60 percent or more.