ExaComm 2017 Workshop at ISC High Performance Posts Full Agenda

The ExaComm 2017 Workshop at ISC High Performance has posted its full agenda. The Third International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale is a one-day workshop taking place at the Frankfurt Marriott Hotel on Thursday, June 22. “The objectives of this workshop will be to share the experiences of the members of this community and to learn the opportunities and challenges in the design trends for exascale communication architectures.”

PSSC Labs Launches Eco Blades for HPC

The Eco Blade is a unique server platform engineered specifically for high-performance, high-density computing environments, simultaneously increasing compute density while decreasing power use. Eco Blade offers two complete, independent servers within 1U of rack space. Each independent server supports up to 64 Intel Xeon processor cores and 1.0 TB of enterprise memory, for a total of up to 128 cores and 2 TB of memory per 1U.

Panel Discussion on Disruptive Technologies for HPC

In this video from the HPC User Forum, Bob Sorensen from Hyperion Research moderates a panel discussion on Disruptive Technologies for HPC. “A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market-leading firms, products and alliances. The term was defined, and the phenomenon analyzed, by Clayton M. Christensen beginning in 1995.”

Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC

Today Mellanox announced that its EDR 100Gb/s InfiniBand solutions have demonstrated 30 to 250 percent higher HPC application performance versus Omni-Path. These performance tests were conducted at end-user installations and at the Mellanox benchmarking and research center, and covered a variety of HPC application segments, including automotive, climate research, chemistry, bioscience, genomics and more.

High Performance Interconnects – Assessments, Rankings and Landscape

Dan Olds from OrionX.net presented this talk at the Switzerland HPC Conference. “Dan Olds will present recent research into the history of High Performance Interconnects (HPI), the current state of the HPI market, where HPIs are going in the future, and how customers should evaluate HPI options today. This will be a highly informative and interactive session.”

Panel Discussion: The Exascale Era

In this video from the Switzerland HPC Conference, Rich Brueckner from insideHPC moderates a panel discussion on exascale computing. “The Exascale Computing Project in the USA is tasked with developing a set of advanced supercomputers with 50x better performance than today’s fastest machines on real applications. This panel discussion will look at the challenges, gaps, and probable pathways forward in this monumental endeavor.”

Panelists:

Gilad Shainer, HPC Advisory Council
Jeffrey Stuecheli, IBM
DK Panda, Ohio State University
Torsten Hoefler, ETH Zurich
Rich Graham, Mellanox

Video: InfiniBand Virtualization

“InfiniBand Virtualization allows a single Channel Adapter to present multiple transport endpoints that share the same physical port. To software, these endpoints are exposed as independent Virtual HCAs (VHCAs), and thus may be assigned to different software entities, such as VMs. VHCAs are visible to Subnet Management, and are managed just like physical HCAs. We will cover the Virtualization model, management, addressing modes, and discuss deployment considerations.”
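
To software, a VHCA looks like any other RDMA device, so existing verbs code does not need to change. As a rough illustration (not taken from the talk), the short C program below uses standard libibverbs calls to enumerate whatever HCAs, physical or virtual, the host exposes and print their port counts; compile with -libverbs.

/* Sketch: enumerate RDMA devices with libibverbs. A VHCA assigned to a VM
 * or container shows up here just like a physical HCA. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: %d physical port(s)\n",
                   ibv_get_device_name(devices[i]), attr.phys_port_cnt);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}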

Experiences with NVMe over Fabrics

“Using RDMA, NVMe over Fabrics (NVMe-oF) provides the high-bandwidth and low-latency characteristics of NVMe to remote devices. Moreover, these performance traits are delivered with negligible CPU overhead, as the bulk of the data transfer is conducted by RDMA. In this session, we present an overview of NVMe-oF and its implementation in Linux. We point out the main design choices and evaluate NVMe-oF performance for both InfiniBand and RoCE fabrics.”
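
For context, the Linux host side of NVMe-oF is typically driven through nvme-cli (for example, nvme connect -t rdma -a <target-ip> -s 4420 -n <subsystem-nqn>), which in turn writes an option string to the kernel's /dev/nvme-fabrics interface. The C sketch below mimics that step directly; it is an illustration rather than anything from the session, and the address, port and NQN are placeholder values.

/* Rough sketch of an NVMe-oF host connect over RDMA via /dev/nvme-fabrics.
 * The target address, port and subsystem NQN below are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=rdma,traddr=192.168.0.10,trsvcid=4420,"
        "nqn=nqn.2014-08.org.example:nvme-target";

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }

    /* On success the kernel creates a new /dev/nvmeX controller whose
     * namespaces then appear as ordinary block devices on the host. */
    if (write(fd, opts, strlen(opts)) < 0)
        perror("connect to NVMe-oF target");

    close(fd);
    return 0;
}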

Accelerating Apache Spark with RDMA

Yuval Degani from Mellanox presented this talk at the OpenFabrics Workshop. “In this talk, we present a Java-based RDMA network layer for Apache Spark. The implementation optimizes both the RPC and the Shuffle mechanisms for RDMA. Initial benchmarking shows up to a 25% improvement for Spark applications.”

Dell Powers New Owens Cluster at Ohio State

Today the Ohio Supercomputer Center dedicated its newest, most powerful supercomputer: the Owens Cluster. The Dell cluster, named for the iconic Olympic champion Jesse Owens, delivers 1.5 petaflops of total peak performance. “OSC’s Owens Cluster represents one of the most significant HPC systems Dell has built,” said Tony Parkinson, Vice President for NA Enterprise Solutions and Alliances at Dell.