Road to Exascale Panel


In this video from the 2014 HPCAC Stanford HPC & Exascale Conference, panelists discuss the technologies that will need to come together on the Road to Exascale.

IBM Data Infrastructure for Exascale Computing


Scott Fadden from IBM presented this talk at the Stanford HPC Conference. “What does it mean to provide data to an Exascale system? Many believe that the current model of adding more disks and installing faster networks won’t get us there. So how do you get the right data to the right processor at the right time? How do we begin to leverage new storage technologies? This presentation explores some of the Exascale challenges and provides insight on what is being done today to learn about and prepare for managing data in an Exascale system.”

CRESTA Project Focuses on “Deep 6” Applications for Exascale


Over at the Cray Blog, Jason Beech-Brandt writes that the European CRESTA project is focusing on six applications with exascale potential.

A System View of Productive Exascale Systems


Barry Bolding from Cray presented this talk at the 2014 HPCAC Stanford HPC & Exascale Conference. “Productive Exascale is not simply about achieving a set of technologies and performance metrics, it is about providing systems that fit into the production scientific workflow environments that will exist at the end of this decade.”

Programming Models for Exascale Systems


DK Panda from Ohio State University presented this talk at the 2014 HPC Advisory Council Stanford Conference. “This talk will focus on programming models and their designs for upcoming exascale systems with millions of processors and accelerators. Current status and future trends of MPI and PGAS (UPC and OpenSHMEM) programming models will be presented.”
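The two families of models in the talk differ in who participates in a data transfer: two-sided message passing (MPI) requires matching send and receive calls, while PGAS models (UPC, OpenSHMEM) let one process write directly into another's partition of a globally addressable space. As a loose illustration only — Python threads standing in for MPI ranks and OpenSHMEM PEs, with no real interconnect involved — the contrast can be sketched as:

```python
import threading
import queue

# Two-sided model (MPI-like): sender and receiver must both take part.
def message_passing_demo():
    chan = queue.Queue()           # stands in for a point-to-point channel
    result = []

    def sender():
        chan.put(42)               # analogue of MPI_Send

    def receiver():
        result.append(chan.get())  # analogue of MPI_Recv; blocks until data arrives

    t1 = threading.Thread(target=sender)
    t2 = threading.Thread(target=receiver)
    t1.start(); t2.start(); t1.join(); t2.join()
    return result[0]

# One-sided PGAS model (OpenSHMEM/UPC-like): each "PE" owns a slice of a
# globally addressable array, and a put writes into a neighbor's slice
# without the target doing anything.
def pgas_demo(num_pes=4):
    global_array = [0] * num_pes   # partitioned global address space

    def pe(rank):
        # shmem_put-style: write my rank into my right neighbor's partition
        global_array[(rank + 1) % num_pes] = rank

    threads = [threading.Thread(target=pe, args=(r,)) for r in range(num_pes)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return global_array

print(message_passing_demo())  # 42
print(pgas_demo())             # [3, 0, 1, 2]
```

The design point the sketch captures is that in the PGAS case the target process never executes a receive; at exascale, avoiding that pairwise coordination is one of the arguments made for one-sided models.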

Challenges of Exascale Systems from an Applications Perspective


Intel’s Mark Seager presented this talk at the 2014 HPC Advisory Council Stanford Conference. “In this talk, we will review the many challenges of building practical Exascale systems by the end of the decade and Extreme scale systems in the 2020s. Some of these challenges, such as extreme levels of parallelism, have direct impact on applications, while others, such as new data paradigms, offer real breakthrough application and scientific opportunities.”

Architecture-aware Algorithms and Software for Petascale and Exascale


Jack Dongarra presented this talk at SC13. “We use a hybridization methodology that is built on representing linear algebra algorithms as collections of tasks and data dependencies, as well as properly scheduling the tasks’ execution over the available multicore and GPU hardware components.”

Podcast: What the Nvidia Tegra K1 Means for the Future of HPC


In this podcast, analyst and author Rob Farber looks at Nvidia’s launch of the Tegra K1 processor. Designed for high-resolution mobile devices, the K1 features the same high-performance Kepler-based GPU architecture that drives the world’s most powerful supercomputers.

HPCAC Stanford Conference & Exascale Workshop Releases Agenda


“The HPC Advisory Council’s worldwide conferences and workshops are excellent educational opportunities for HPC and data center IT professionals who are looking to deploy or provide additional enhancements and functionality to their advanced high-performance solutions.”

RIKEN to Host Exascale Supercomputer in 2020


This week the Japanese Ministry of Education, Culture, Sports, Science and Technology selected RIKEN to develop a new exascale supercomputer. With a planned deployment in 2020, the new system is expected to keep Japan at the leading edge of computing science and technology.