Comparing OpenACC and OpenMP Performance and Programmability

Jeff Larkin, NVIDIA

“OpenACC and OpenMP provide programmers with two good options for portable, high-level parallel programming for GPUs. This talk will discuss similarities and differences between the two specifications in terms of programmability, portability, and performance.”
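
Neither model's syntax appears in the abstract, but the contrast is easy to see on a simple SAXPY loop. The sketch below is a generic illustration, not code from the session; it shows the same kernel under each model's offload directives:

    // SAXPY with OpenACC: the compiler derives the GPU kernel from the loop.
    void saxpy_acc(int n, float a, const float *restrict x, float *restrict y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // The same loop with OpenMP 4.x target-offload directives.
    void saxpy_omp(int n, float a, const float *restrict x, float *restrict y) {
        #pragma omp target teams distribute parallel for map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }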

UPC and OpenSHMEM PGAS Models on GPU Clusters

DK Panda, Ohio State University

“Learn about extensions that enable efficient use of Partitioned Global Address Space (PGAS) Models like OpenSHMEM and UPC on supercomputing clusters with NVIDIA GPUs. PGAS models are gaining attention for providing shared memory abstractions that make it easy to develop applications with dynamic and irregular communication patterns. However, the existing UPC and OpenSHMEM standards do not allow communication calls to be made directly on GPU device memory. This talk discusses simple extensions to the OpenSHMEM and UPC models to address this issue.”
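
For orientation, here is a minimal standard OpenSHMEM exchange between PEs; it is a generic illustration, not code from the talk. Under the current standard the symmetric buffer must live in host memory, which is exactly the limitation the proposed extensions target:

    #include <shmem.h>

    int main(void) {
        shmem_init();
        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric allocation, visible to all PEs; the current standard
           requires host memory here. The extensions discussed in the talk
           would allow a GPU device pointer in its place. */
        long *buf = shmem_malloc(sizeof(long));
        long src = me;

        /* One-sided put into the next PE's copy of buf. */
        shmem_putmem(buf, &src, sizeof(long), (me + 1) % npes);
        shmem_barrier_all();

        shmem_free(buf);
        shmem_finalize();
        return 0;
    }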

Deep Learning at Scale

“We present a state-of-the-art image recognition system, Deep Image, developed using end-to-end deep learning. The key components are a custom-built supercomputer dedicated to deep learning, a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images.”
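
Deep Image's communication strategies are custom, but the baseline pattern they build on is the familiar data-parallel gradient exchange. The MPI sketch below is an illustration of that baseline, not Baidu's actual code:

    #include <mpi.h>

    /* One data-parallel step: each rank has computed gradients on its own
       shard of the training data; summing and averaging them across ranks
       keeps all model replicas in sync. */
    void allreduce_gradients(float *grad, int n, MPI_Comm comm) {
        int nranks;
        MPI_Comm_size(comm, &nranks);
        MPI_Allreduce(MPI_IN_PLACE, grad, n, MPI_FLOAT, MPI_SUM, comm);
        for (int i = 0; i < n; ++i)
            grad[i] /= nranks;
    }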

Video: Accelerating OpenPOWER Using NVM Express SSDs and CAPI

“We present results for a platform consisting of an NVM Express SSD, a CAPI accelerator card and a software stack running on a Power8 system. We show how the threading of the Power8 CPU can be used to move data from the SSD to the CAPI card at very high speeds and implement accelerator functions inside the CAPI card that can process the data at these speeds.”
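
The CAPI side of the demo is hardware-specific, but the host-side idea of using many CPU threads to keep the SSD saturated can be sketched generically. The pthread/pread example below is an illustration only; the device path is an assumption, and it is not the presenters' code:

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define NTHREADS 8
    #define CHUNK (1 << 20)              /* 1 MiB per request */
    #define SPAN  (64 * (size_t)CHUNK)   /* bytes read per thread */

    struct job { int fd; off_t off; char *dst; };

    /* Each thread streams its own byte range with pread(), keeping many
       requests in flight across the CPU's hardware threads. */
    static void *reader(void *arg) {
        struct job *j = arg;
        for (off_t o = 0; o < (off_t)SPAN; o += CHUNK)
            pread(j->fd, j->dst + o, CHUNK, j->off + o);
        return NULL;
    }

    int main(void) {
        char *buf = malloc(NTHREADS * SPAN);
        int fd = open("/dev/nvme0n1", O_RDONLY);   /* example device path */
        pthread_t tid[NTHREADS];
        struct job jobs[NTHREADS];

        for (int t = 0; t < NTHREADS; ++t) {
            jobs[t] = (struct job){ fd, (off_t)(t * SPAN), buf + t * SPAN };
            pthread_create(&tid[t], NULL, reader, &jobs[t]);
        }
        for (int t = 0; t < NTHREADS; ++t)
            pthread_join(tid[t], NULL);

        close(fd);
        free(buf);
        return 0;
    }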

Video: Enabling OpenACC Performance Analysis

Learn how OpenACC runtimes also expose performance-related information that reveals where your OpenACC applications are wasting clock cycles. The talk shows how profilers can connect to OpenACC applications to record how much time is spent in OpenACC regions and what device activity those regions generate.
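
For reference, hooks of this kind were standardized as the OpenACC profiling interface (acc_prof.h in OpenACC 2.5). A minimal tool library written against that interface registers callbacks roughly like this; exact struct fields and type names should be checked against your runtime's header:

    #include <acc_prof.h>
    #include <stdio.h>

    /* Called by the OpenACC runtime at the start and end of each compute
       construct; prof_info carries source file/line context for the region. */
    static void on_compute(acc_prof_info *pi, acc_event_info *ei,
                           acc_api_info *ai) {
        printf("event %d at %s:%d\n", (int)pi->event_type,
               pi->src_file, pi->line_no);
    }

    /* Entry point the runtime looks up in a tool library, letting the tool
       register callbacks for the events it cares about. */
    void acc_register_library(acc_prof_reg reg, acc_prof_reg unreg,
                              acc_prof_lookup_func lookup) {
        reg(acc_ev_compute_construct_start, on_compute, acc_reg);
        reg(acc_ev_compute_construct_end,   on_compute, acc_reg);
    }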

Achieving Near-Native GPU Performance in the Cloud

John Paul Walters

“In this session we describe how GPUs can be used within virtual environments with near-native performance. We begin by showing GPU performance across four hypervisors: VMWare ESXi, KVM, Xen, and LXC. After examining the performance characteristics of each platform, we extend the results to the multi-node case with nodes interconnected by QDR InfiniBand. We demonstrate multi-node GPU performance using GPUDirect-enabled MPI, achieving efficiencies of 97-99% of a non-virtualized system.”
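
GPUDirect-enabled, CUDA-aware MPI is what allows those measurements: device buffers are handed directly to MPI calls. The sketch below is generic code, not the benchmark used in the session:

    #include <mpi.h>
    #include <cuda_runtime.h>

    /* Requires an MPI build with CUDA/GPUDirect support: device pointers
       are passed straight to MPI, which moves the data GPU-to-GPU without
       staging it through host memory. Run with two ranks. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float *dbuf;
        cudaMalloc((void **)&dbuf, 1024 * sizeof(float));

        if (rank == 0)
            MPI_Send(dbuf, 1024, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(dbuf, 1024, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }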

Simulating Global Atmosphere with NICAM on TSUBAME2.5 Using OpenACC

“OpenACC was applied to a global high-resolution atmosphere model named NICAM. We executed the dynamical core test without rewriting any specific kernel subroutines for GPU execution. Only 5% of the lines of source code were modified, demonstrating good portability. The results showed that the kernels generated by OpenACC achieved good performance, commensurate with the memory bandwidth of the GPU, as well as good weak scalability. A large-scale simulation was carried out using 2560 GPUs, achieving 60 TFLOPS.”
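
The porting style the abstract describes, adding directives around existing loops rather than rewriting kernels, looks roughly like the following hypothetical fragment (in C for brevity; NICAM itself is Fortran):

    /* Directives wrap an existing loop; the loop body is left untouched. */
    void compute_flux(int n, const double *rho, const double *vel,
                      double *flux) {
        #pragma acc data copyin(rho[0:n], vel[0:n]) copyout(flux[0:n])
        {
            #pragma acc parallel loop
            for (int i = 0; i < n; ++i)
                flux[i] = rho[i] * vel[i];   /* original line, unchanged */
        }
    }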

Attacking HIV with Titan and Blue Waters

“The highly parallel molecular dynamics code NAMD was one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007, and it is now used to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV virus capsid, on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines.”

E4-ARKA: ARM64+GPU+IB is Now Here

“E4 Computer Engineering has introduced ARKA, the first server solution based on an ARM 64-bit SoC dedicated to HPC. The compute node is boosted by discrete NVIDIA K20 GPUs, with 10Gb Ethernet and FDR InfiniBand networks implemented by default. In this presentation, the hardware configuration of the compute node is described in detail, and the unique capabilities of the ARM+GPU+IB combination are demonstrated through synthetic benchmarks and application tests, with particular attention to molecular dynamics software.”

Video: HPC Solution Stack on OpenPOWER

“This demo shows how IBM OpenPOWER can serve as the foundation of a complete High Performance Computing solution. From cluster deployment, job scheduling, system management, and application management to the scientific computing workloads that run on top of them, all of these components can be built on the IBM OpenPOWER platform with good usability and performance. The demo also shows the simplicity of migrating a complete x86-based HPC stack to the OpenPOWER platform.”