In this video from the 2017 HPC Advisory Council Stanford Conference, DK Panda presents: Best Practices: Designing HPC & Deep Learning Middleware for Exascale Systems.
“This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at The Ohio State University. He has published over 400 papers in the area of high-end computing and networking. The MVAPICH2 libraries, with support for MPI and PGAS on InfiniBand, Omni-Path, iWARP, RoCE, GPGPUs, Xeon Phis, and virtualization, are currently being used by more than 2,725 organizations worldwide (in 83 countries), and more than 407,000 downloads of these libraries have taken place from the project’s site. These libraries power several InfiniBand clusters on the TOP500 list, including the 1st-, 13th-, 17th- and 40th-ranked systems. The RDMA packages for Apache Hadoop, Apache Spark, and Memcached, together with the OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu), are also publicly available; they are currently being used by more than 205 organizations from 29 countries, with more than 19,000 downloads from the project’s site. High-performance deep learning frameworks such as Caffe are available from the newly created High-Performance Deep Learning project site. He is an IEEE Fellow.