IBTA Publishes RoCE Interoperability List from Plugfest

Today the InfiniBand Trade Association (IBTA) announced the completion of the first Plugfest for RDMA over Converged Ethernet (RoCE) solutions and the publication of the RoCE Interoperability List on the IBTA website. Fifteen member companies participated, bringing their RoCE adapters, cables, and switches to the event for testing. Products that successfully passed testing have been added to the RoCE Interoperability List.

KTH in Sweden Moves to EDR 100Gb/s InfiniBand

Today Mellanox announced that its EDR 100Gb/s InfiniBand solutions have been selected by the KTH Royal Institute of Technology for use in its PDC Center for High Performance Computing. Mellanox’s robust and flexible EDR InfiniBand solution offers higher interconnect speed, lower latency, and smart accelerations to maximize efficiency, enabling the PDC Center to achieve world-leading data center performance across a variety of applications, including advanced modeling of climate change, brain function, and protein-drug interactions.

Interview: Hot Interconnects Conference Returns to Santa Clara Aug 26-28

“This year we have a specific focus on the latest advances in different areas of networks, showcasing some of the latest and greatest next generation networking hardware. Our focus on Data Center vs. HPC networks will allow for an exchange that will benefit both communities.”

Eurotech QPACE2 Supercomputer Ranks 379 on TOP500

Last week at ISC 2015, Eurotech announced that the company has completed the installation of a new supercomputer prototype at the University of Regensburg. With 15,872 compute cores, the QPACE2 supercomputer is ranked #379 on the June 2015 TOP500 list.

New UCX Network Communication Framework for Next-Gen Programming Models

UCX is a collaboration between industry, laboratories, and academia to create an open-source, production-grade communication framework for HPC applications. “The path to Exascale, in addition to many other challenges, requires programming models where communications and computations unfold together, collaborating instead of competing for the underlying resources. In such an environment, providing holistic access to the hardware is a major component of any programming model or communication library. With UCX, we have the opportunity to provide not only a vehicle for production quality software, but also a low-level research infrastructure for more flexible and portable support for the Exascale-ready programming models.”
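
To make the idea of a low-level communication substrate concrete, here is a minimal sketch of bringing up a context and worker through UCX’s UCP layer. It follows the publicly documented UCP API rather than the 2015-era code announced here, so structure and flag names may differ from that release; error handling is abbreviated.

```c
#include <stdio.h>
#include <string.h>
#include <ucp/api/ucp.h>

int main(void)
{
    /* Request tag-matching (MPI-style) communication from the framework. */
    ucp_params_t params;
    memset(&params, 0, sizeof(params));
    params.field_mask = UCP_PARAM_FIELD_FEATURES;
    params.features   = UCP_FEATURE_TAG;

    /* The context owns the network resources selected for this process;
       passing NULL for the config uses the environment defaults. */
    ucp_context_h context;
    if (ucp_init(&params, NULL, &context) != UCS_OK) {
        fprintf(stderr, "ucp_init failed\n");
        return 1;
    }

    /* A worker is a progress engine: polling it drives outstanding
       communication, letting computation and communication overlap. */
    ucp_worker_params_t wparams;
    memset(&wparams, 0, sizeof(wparams));
    wparams.field_mask  = UCP_WORKER_PARAM_FIELD_THREAD_MODE;
    wparams.thread_mode = UCS_THREAD_MODE_SINGLE;

    ucp_worker_h worker;
    if (ucp_worker_create(context, &wparams, &worker) != UCS_OK) {
        ucp_cleanup(context);
        return 1;
    }

    ucp_worker_destroy(worker);
    ucp_cleanup(context);
    return 0;
}
```

Endpoints for sends and receives would then be created against the worker; progress is driven by polling it, which is what allows communications and computations to unfold together as described above.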

Video: RDMA Container Support

In this video from the 2015 OFS Workshop, Liran Liss from Mellanox presents: RDMA Container Support.

IBTA Launches the RoCE Initiative

Today the InfiniBand Trade Association (IBTA) announced the launch of the RoCE Initiative to further the advancement of RDMA over Converged Ethernet (RoCE) technology and promote RoCE awareness.

UPC and OpenSHMEM PGAS Models on GPU Clusters

DK Panda, Ohio State University

“Learn about extensions that enable efficient use of Partitioned Global Address Space (PGAS) Models like OpenSHMEM and UPC on supercomputing clusters with NVIDIA GPUs. PGAS models are gaining attention for providing shared memory abstractions that make it easy to develop applications with dynamic and irregular communication patterns. However, the existing UPC and OpenSHMEM standards do not allow communication calls to be made directly on GPU device memory. This talk discusses simple extensions to the OpenSHMEM and UPC models to address this issue.”
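
The limitation the talk targets is easiest to see in code: standard OpenSHMEM puts operate on host-side symmetric memory, so GPU data must first be staged through the host. Below is a minimal sketch of that baseline path (not the proposed extensions), assuming CUDA and an OpenSHMEM 1.x library; the buffer names are illustrative.

```c
#include <stdlib.h>
#include <shmem.h>
#include <cuda_runtime.h>

#define N 1024

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric heap buffer: standard put/get calls operate on
       host-side symmetric memory, not on GPU device pointers. */
    double *sym        = (double *)shmem_malloc(N * sizeof(double));
    double *host_stage = (double *)malloc(N * sizeof(double));

    double *dev;
    cudaMalloc((void **)&dev, N * sizeof(double));
    /* ... a kernel fills dev ... */

    /* The baseline path the proposed extensions aim to remove:
       stage the device buffer through the host before communicating. */
    cudaMemcpy(host_stage, dev, N * sizeof(double), cudaMemcpyDeviceToHost);
    shmem_double_put(sym, host_stage, N, (me + 1) % npes);
    shmem_barrier_all();

    cudaFree(dev);
    free(host_stage);
    shmem_free(sym);
    shmem_finalize();
    return 0;
}
```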

Achieving Near-Native GPU Performance in the Cloud

John Paul Walters

“In this session we describe how GPUs can be used within virtual environments with near-native performance. We begin by showing GPU performance across four hypervisors: VMware ESXi, KVM, Xen, and LXC. After showing the performance characteristics of each platform, we extend the results to the multi-node case with nodes interconnected by QDR InfiniBand. We demonstrate multi-node GPU performance using GPUDirect-enabled MPI, achieving efficiencies of 97-99% of a non-virtualized system.”
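
For contrast with the host-staging pattern shown earlier, a CUDA-aware, GPUDirect-enabled MPI lets ranks hand device pointers directly to send and receive calls. A minimal sketch, assuming an MPI build with CUDA support; whether the transfer pipelines through host memory or goes peer-to-peer over the fabric depends on the build and interconnect.

```c
#include <mpi.h>
#include <cuda_runtime.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *dev;
    cudaMalloc((void **)&dev, N * sizeof(double));

    /* With a CUDA-aware MPI, the device pointer is passed directly;
       no explicit cudaMemcpy to a host buffer is required. */
    if (rank == 0) {
        MPI_Send(dev, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dev, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(dev);
    MPI_Finalize();
    return 0;
}
```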

SGI Powers Earthquake Research in Japan

Today SGI announced that the Earthquake and Volcano Information Center of the Earthquake Research Institute (ERI) at the University of Tokyo has deployed a large-scale parallel computing solution from SGI for leading-edge seismological and volcanological research.