Exascale Architecture Trends and Implications for Programming Systems


In this video from EASC2016 in Stockholm, John Shalf from NERSC presents: Exascale Computer Architecture Trends and Implications for Programming Systems. During his introduction, Shalf announces that he is leaving his CTO role at NERSC to be a Co-Lead on Exascale Hardware for the DOE Exascale Program.

“For the past twenty-five years, a single model of parallel programming (largely bulk-synchronous MPI) has for the most part been sufficient to permit translation of algorithms into reasonable parallel programs for more complex applications. In 2004, however, a confluence of events changed forever the architectural landscape that underpinned our current assumptions about what to optimize for when we design new algorithms and applications. We have been taught to prioritize and conserve things that were valuable 20 years ago, but the new technology trends have inverted the value of our former optimization targets. The time has come to examine the end result of our extrapolated design trends and use them as a guide to re-prioritize what resources to conserve in order to derive performance for future applications. This talk will describe the challenges of programming future computing systems. It will then provide some highlights from the search for durable programming abstractions that more closely track emerging computer technology trends, so that when we convert our codes over, they will last through the next decade.”
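For readers unfamiliar with the term, the sketch below (an illustrative example, not code from the talk) shows the bulk-synchronous MPI pattern the abstract refers to: every rank computes independently on local data, then all ranks synchronize in a global collective before the next step begins.

/* Illustrative sketch of the bulk-synchronous MPI pattern (not from the talk). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0, global = 0.0;
    for (int step = 0; step < 10; ++step) {
        /* Local compute phase: each rank works independently. */
        local = (double)(rank + step);

        /* Global synchronization phase: no rank proceeds until the
         * collective completes, so the slowest rank sets the pace. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    }

    if (rank == 0)
        printf("final global sum = %f across %d ranks\n", global, size);

    MPI_Finalize();
    return 0;
}

The point of the talk is that this compute-then-globally-synchronize rhythm bakes in assumptions about the relative cost of computation, data movement, and synchronization that emerging hardware no longer satisfies.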

John Shalf is one of our Rock Stars of HPC. He is CTO for the National Energy Research Scientific Computing Center (NERSC) and Department Head for Computer Science at Lawrence Berkeley National Laboratory. He is a co-author of over 60 publications in the field of parallel computing software and HPC technology, including three best papers and the widely cited report “The Landscape of Parallel Computing Research: A View from Berkeley” (with David Patterson and others), as well as “ExaScale Software Study: Software Challenges in Extreme Scale Systems,” which sets the Defense Advanced Research Projects Agency’s (DARPA’s) information technology research investment strategy for the next decade. He was a member of the Berkeley Lab/NERSC team that won a 2002 R&D 100 Award for the RAGE robot. Before joining Berkeley Lab in 2000, he was a research programmer at the National Center for Supercomputing Applications at the University of Illinois and a visiting scientist at the Max-Planck-Institut für Gravitationsphysik/Albert Einstein Institute in Potsdam, Germany, where he co-developed the Cactus code framework for computational astrophysics.

John Shalf will moderate a panel discussion entitled Beyond von Neumann at ISC 2016 in Frankfurt.
