High-Performance and Scalable Designs of Programming Models for Exascale Systems


DK Panda, Ohio State University

In this video from the Switzerland HPC Conference, DK Panda from Ohio State University presents: High-Performance and Scalable Designs of Programming Models for Exascale Systems.

“This talk will focus on challenges in designing programming models and runtime environments for Exascale systems with millions of processors and accelerators. We will focus on MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (KNL and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”

DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at The Ohio State University. He has published over 400 papers in the area of high-end computing and networking. The MVAPICH2 libraries, with support for MPI and PGAS on InfiniBand, Omni-Path, iWARP, RoCE, GPGPUs, Xeon Phis, and virtualization (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,725 organizations worldwide (in 83 countries). More than 407,000 downloads of these libraries have taken place from the project’s site. These libraries are powering several InfiniBand clusters (including the 1st, 13th, 17th, and 40th ranked ones) in the TOP500 list. The RDMA packages for Apache Hadoop, Apache Spark, and Memcached, together with OSU HiBD benchmarks from his group, are also publicly available. These packages are currently being used by more than 205 organizations from 29 countries. More than 19,000 downloads of these packages have taken place from the project’s site. High-performance Deep Learning frameworks like Caffe are available from the newly created High-Performance Deep Learning project site.

See more talks in the Switzerland HPC Conference Video Gallery
