GTC to Feature 90 Sessions on HPC and Supercomputing

Accelerated computing continues to gain momentum. This year, the GPU Technology Conference (GTC) will feature 90 sessions on HPC and Supercomputing. The event takes place May 8-11 in San Jose, California.

The HPC and Supercomputing Track Sessions will focus on how computational and data science are used to solve traditional HPC problems in healthcare, weather, astronomy, and other domains. GPU developers can also connect with innovators and researchers as they share their groundbreaking work using GPU computing.

Featured talks include:

DK Panda, Ohio State University

Pushing the Frontier of HPC and Deep Learning. Explore new developments in the MVAPICH2-GDR library that help MPI developers realize maximum performance and scalability on HPC clusters with NVIDIA GPUs. See how multiple designs focusing on GPUDirect RDMA (GDR), GDR_Async, non-blocking collectives, support for unified memory, and datatype processing boost HPC application performance. We’ll target emerging deep learning frameworks with novel designs and enhancements to this library that accommodate their large-message and dense GPU-computing requirements. We’ll also present OSU-Caffe, an MPI-based distributed and scalable deep learning framework, along with its performance and scalability results.

Jiri Kraus, NVIDIA

Multi-GPU Programming with MPI. Learn how to program multi-GPU systems or GPU clusters using the Message Passing Interface (MPI) and OpenACC or NVIDIA CUDA. We’ll start with a quick introduction to MPI and how it can be combined with OpenACC or NVIDIA CUDA. Then, we’ll cover advanced topics like CUDA-aware MPI and how to overlap communication with computation to hide communication times. We’ll also cover the latest improvements in CUDA-aware MPI, interaction with Unified Memory, the Multi-Process Service (MPS, a.k.a. Hyper-Q for MPI), and MPI support in the NVIDIA performance analysis tools.
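The CUDA-aware MPI pattern this session covers can be sketched as follows. This is a minimal, hypothetical example (not taken from the talk), assuming a CUDA-aware MPI build such as MVAPICH2-GDR or a CUDA-enabled Open MPI: device pointers are passed directly to MPI calls, and non-blocking communication leaves room to overlap kernel work on interior data with the halo exchange.

```c
/* Hedged sketch of CUDA-aware MPI: a ring-style halo exchange using
 * device buffers. Requires a CUDA-aware MPI library and NVIDIA GPUs.
 * Build (illustrative): mpicc halo.c -o halo -lcudart
 */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;              /* halo size in doubles */
    double *d_send, *d_recv;            /* device-resident buffers */
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMalloc((void **)&d_recv, n * sizeof(double));

    int peer = (rank + 1) % size;       /* simple ring neighbor */

    /* With a CUDA-aware MPI, device pointers go straight into MPI
     * calls; the library (via GPUDirect RDMA where available) avoids
     * an explicit staging copy through host memory. */
    MPI_Request reqs[2];
    MPI_Irecv(d_recv, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(d_send, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...launch kernels on interior data here, overlapping
     * computation with the in-flight halo communication... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}
```

Without a CUDA-aware MPI, the same exchange would need explicit cudaMemcpy staging to host buffers before and after each MPI call, which is exactly the overhead the session's techniques aim to eliminate.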

James Phillips, University of Illinois

Petascale Molecular Dynamics Simulations from Titan to Summit. The highly parallel molecular dynamics code NAMD is used on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV capsid. In 2007, NAMD was one of the first codes to run on a GPU cluster. It’s now being prepared for the ORNL Summit supercomputer, which will feature IBM Power9 CPUs, NVIDIA Volta GPUs, and the NVLink CPU-GPU interconnect. Come learn the opportunities and pitfalls of taking GPU computing to the petascale, along with recent NAMD performance advances and early results from the Summit Power8+/P100 “Minsky” development cluster.

See the full listing of HPC & Supercomputing sessions. Early-bird registration rates for GTC end April 5.

Check out our insideHPC Events Calendar
