In this video from the 2014 HPCAC Stanford HPC & Exascale Conference, panelists discuss the technologies that will need to come together on the Road to Exascale.
Scott Fadden from IBM presented this talk at the Stanford HPC Conference. “What does it mean to provide data to an Exascale system? Many believe that the current model of adding more disks and installing faster networks won’t get us there. So how do you get the right data to the right processor at the right time? How do we begin to leverage new storage technologies? This presentation explores some of the Exascale challenges and provides insight on what is being done today to learn about and prepare for managing data in an Exascale system.”
Barry Bolding from Cray presented this talk at the 2014 HPCAC Stanford HPC & Exascale Conference. “Productive Exascale is not simply about achieving a set of technologies and performance metrics; it is about providing systems that fit into the production scientific workflow environments that will exist at the end of this decade.”
DK Panda from Ohio State University presented this talk at the 2014 HPC Advisory Council Stanford Conference. “This talk will focus on programming models and their designs for upcoming exascale systems with millions of processors and accelerators. Current status and future trends of MPI and PGAS (UPC and OpenSHMEM) programming models will be presented.”
Intel’s Mark Seager presented this talk at the 2014 HPC Advisory Council Stanford Conference. “In this talk, we will review the many challenges of building practical Exascale systems by the end of the decade and Extreme scale systems in the 2020s. Some of these challenges, such as extreme levels of parallelism, have a direct impact on applications, while others, such as new data paradigms, offer real breakthrough application and scientific opportunities.”