@HPCpodcast: UC Berkeley’s and LBNL’s Kathy Yelick on Exascale, the Future of Supercomputing, Partitioned Global Address Space and Diversity in HPC

Today, on the eve of Exascale Day, the @HPCpodcast is delighted to have Kathy Yelick as our special guest to observe Oct. 18 (10/18, for 10^18: a billion billion calculations per second). Dr. Yelick is the Robert S. Pepper Distinguished Professor of Electrical Engineering and Computer Sciences and the Vice Chancellor for Research at UC Berkeley, and Senior Faculty Scientist at Lawrence Berkeley National Laboratory.

Pagoda Project Rolls Out First Software Libraries for Exascale

The Pagoda Project—a three-year Exascale Computing Project software development program based at Lawrence Berkeley National Laboratory—has reached a major milestone: making its open source software libraries publicly available as of September 30, 2017. “Our job is to ensure that the exascale applications reach key performance parameters defined by the DOE,” said Scott Baden, who leads the project.

High-Performance and Scalable Designs of Programming Models for Exascale Systems

“This talk will focus on challenges in designing programming models and runtime environments for Exascale systems with millions of processors and accelerators. We will focus on MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (KNL and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness.”

Designing HPC & Deep Learning Middleware for Exascale Systems

DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”

What’s Next for HPC? A Q&A with Michael Kagan, CTO of Mellanox

As an HPC technology vendor, Mellanox is in the business of providing the leading-edge interconnects that drive many of the world’s fastest supercomputers. To learn more about what’s new for SC16, we caught up with Michael Kagan, CTO of Mellanox. “Moving InfiniBand beyond EDR to HDR is critical not only for HPC, but also for the numerous industries that are adopting AI and Big Data to make real business sense out of the amount of data available and that we continue to collect on a daily basis.”

SC16 to Feature 38 HPC Workshops

Today SC16 announced that the conference will feature 38 high-quality workshops to complement the overall Technical Program events, expand the knowledge base of their subject areas, and extend the conference's impact by providing greater depth of focus.

Video: UPC++ Parallel Programming Extension

In this video from the 2016 OpenFabrics Workshop, Yili Zheng from LBNL presents: UPC++. “UPC++ is a parallel programming extension for developing C++ applications with the partitioned global address space (PGAS) model. UPC++ has demonstrated excellent performance and scalability with applications and benchmarks such as global seismic tomography, Hartree-Fock, the BoxLib AMR framework and more. In this talk, we will give an overview of UPC++ and discuss the opportunities and challenges of leveraging modern network features.”

Kathy Yelick to Receive Ken Kennedy Award at SC15

Today ACM and IEEE announced that Kathy Yelick from LBNL will be the recipient of the 2015 ACM/IEEE Computer Society Ken Kennedy Award for innovative research contributions to parallel computing languages that have been used in both the research community and in production environments. She was also cited for her strategic leadership of the national research laboratories and for developing novel educational and mentoring tools. The award will be presented at SC15, which takes place Nov. 15-20, in Austin, Texas.

UPC and OpenSHMEM PGAS Models on GPU Clusters

“Learn about extensions that enable efficient use of Partitioned Global Address Space (PGAS) Models like OpenSHMEM and UPC on supercomputing clusters with NVIDIA GPUs. PGAS models are gaining attention for providing shared memory abstractions that make it easy to develop applications with dynamic and irregular communication patterns. However, the existing UPC and OpenSHMEM standards do not allow communication calls to be made directly on GPU device memory. This talk discusses simple extensions to the OpenSHMEM and UPC models to address this issue.”