“These application development awards are a major first step toward achieving mission critical application readiness on the path to exascale,” said ECP director Paul Messina. “A key element of the ECP’s mission is to deliver breakthrough HPC modeling and simulation solutions that confidently deliver insight and predict answers to the most critical U.S. problems and challenges in scientific discovery, energy assurance, economic competitiveness, and national security,” Messina said. “Application readiness is a strategic aspect of our project and foundational to the development of holistic, capable exascale computing environments.”
The move to network offloading is the first step in co-designed systems. Servicing the huge number of packets that arrive at modern data rates imposes substantial processing overhead on the host, and that overhead can significantly reduce network performance. Offloading network processing to the network interface card helped remove this bottleneck, along with several others.
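To make the per-packet budget concrete, here is a rough back-of-envelope sketch; the 100 Gb/s link rate and 1500-byte frame size are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope estimate of per-packet overhead at modern link rates.
# The link rate and frame size are assumptions chosen for illustration.

LINK_RATE_GBPS = 100   # assumed link speed in gigabits per second
FRAME_BYTES = 1500     # assumed MTU-sized Ethernet frames

bits_per_frame = FRAME_BYTES * 8
packets_per_second = (LINK_RATE_GBPS * 1e9) / bits_per_frame
ns_per_packet = 1e9 / packets_per_second

print(f"Packets/s to sustain line rate: {packets_per_second:,.0f}")
print(f"Time budget per packet:         {ns_per_packet:.0f} ns")
# Roughly 8.3 million packets per second leaves only ~120 ns per packet,
# which is why moving protocol processing off the host CPU and onto the
# NIC pays off.
```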
Paul Messina presented this talk at the 2016 Argonne Training Program on Extreme-Scale Computing. “The President’s NSCI initiative calls for the development of Exascale computing capabilities. The U.S. Department of Energy has been charged with carrying out that role in an initiative called the Exascale Computing Project (ECP).” Messina has been tapped to lead the project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project program office is located at Oak Ridge.
Thomas Schulthess presented this talk at the MVAPICH User Group. “Implementation of exascale computing will be different in that application performance is supposed to play a central role in determining the system performance, rather than just considering floating point performance of the high-performance Linpack benchmark. This immediately raises the question as to what the yardstick will be, by which we measure progress towards exascale computing. I will discuss what type of performance improvements will be needed to reach kilometer-scale global climate and weather simulations. This challenge will probably require more than exascale performance.”
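One application-centric yardstick commonly used for climate and weather codes is simulated years per wall-clock day (SYPD). The sketch below only shows how such a metric is computed; the run in it is hypothetical and not taken from Schulthess’s talk:

```python
# Minimal sketch of an application-level metric: simulated years per
# wall-clock day (SYPD). Inputs are hypothetical, for illustration only.

def sypd(simulated_days: float, wallclock_hours: float) -> float:
    """Simulated years per wall-clock day."""
    simulated_years = simulated_days / 365.0
    wallclock_days = wallclock_hours / 24.0
    return simulated_years / wallclock_days

# Example: a run that advances the model 10 simulated days in 6 wall-clock hours.
print(f"{sypd(10, 6):.2f} SYPD")   # ~0.11 SYPD
```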
In this podcast, the Radio Free HPC team looks at why it’s so difficult for new processor architectures to gain traction in HPC and the datacenter. Plus, we introduce a new regular feature for our show: The Catch of the Week.
“Clear trends in the past and current petascale systems (i.e., Jaguar and Titan) and the new generation of systems that will transition us toward exascale (i.e., Aurora and Summit) outline how concurrency and peak performance are growing dramatically; however, I/O bandwidth remains stagnant. In this talk, we explore challenges when dealing with I/O-ignorant high performance computing systems and opportunities for integrating I/O awareness in these systems.”
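One way to see why stagnant I/O bandwidth hurts is to look at checkpoint time as memory grows. The numbers below are illustrative assumptions, not measurements of any of the systems named above:

```python
# Illustrative (not measured) comparison of full-memory checkpoint cost
# when aggregate memory grows much faster than file-system bandwidth.
# All figures are assumptions made up for this example.

systems = {
    # name: (aggregate memory in TB, file-system bandwidth in TB/s)
    "earlier petascale-class system": (300,   1.0),
    "pre-exascale-class system":      (2_500, 2.5),
}

for name, (memory_tb, bw_tb_s) in systems.items():
    checkpoint_minutes = memory_tb / bw_tb_s / 60
    print(f"{name}: full-memory checkpoint ~{checkpoint_minutes:.0f} min")
# Memory grows ~8x in this sketch while bandwidth grows only 2.5x, so the
# time spent writing a defensive checkpoint more than triples -- one
# reason I/O-aware scheduling and data staging matter at scale.
```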
“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day. A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
“Between 2011 and 2016, eight projects, with a total budget of more than €50 million, were selected for this first push in the direction of the next-generation supercomputer: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont-Blanc (I + II) and Numexas. The challenges they addressed in their projects were manifold: innovative approaches to algorithm and application development, system software, energy efficiency, tools and hardware design took centre stage.”
Raj Hazra presented this talk at ISC 2016. As part of the company’s launch of the Intel Xeon Phi processor, Hazra describes how cognitive computing and HPC are going to work together. “Intel will introduce and showcase a range of new technologies helping to fuel the path to deeper insight and HPC’s next frontier. Among this year’s new products is the Intel Xeon Phi processor, Intel’s first bootable host processor, specifically designed for highly parallel workloads. It is also the first to integrate both memory and fabric technologies. A bootable x86 CPU, the Intel Xeon Phi processor offers greater scalability and is capable of handling a wider variety of workloads and configurations than accelerator products.”