“For those who haven’t been following the details of one of DOE’s more recent procurement rounds, the NERSC-8 and Trinity request for proposals (RFP) explicitly required that all vendor proposals include a burst buffer to address the capability of multi-petaflop simulations to dump tremendous amounts of data in very short order. The target use case is for petascale checkpoint-restart, where the memory of thousands of nodes (hundreds of terabytes of data) needs to be flushed to disk in an amount of time that doesn’t dominate the overall execution time of the calculation.”
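The scale of the problem is easy to see with some back-of-envelope arithmetic. The figures below are illustrative assumptions only (they are not NERSC-8/Trinity specifications): a sketch of how total checkpoint size translates into the aggregate bandwidth a burst buffer must sustain.

```python
# Back-of-envelope: bandwidth needed to checkpoint a petascale machine.
# All numbers are hypothetical assumptions, not values from the DOE RFP.

nodes = 10_000               # assumed node count
mem_per_node_tb = 0.064      # assumed 64 GB of memory per node, in TB

# Total data to flush at each checkpoint (hundreds of terabytes).
checkpoint_tb = nodes * mem_per_node_tb

# Assume a 10-minute checkpoint budget so I/O doesn't dominate runtime.
target_seconds = 600
required_tb_per_s = checkpoint_tb / target_seconds

print(f"Checkpoint size: {checkpoint_tb:.0f} TB")
print(f"Required aggregate bandwidth: {required_tb_per_s * 1000:.0f} GB/s")
```

Even under these modest assumptions, the system needs on the order of a terabyte per second of aggregate write bandwidth, which is the gap a flash-based burst buffer is meant to fill between node memory and the parallel file system.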
On June 22, the US Department of Energy (DOE) and Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT) signed an agreement to collaborate on exascale supercomputing technologies for the scientific community. In a nutshell, the plan is to build a common OS kernel that can be used by all post-petascale systems, regardless of hardware eccentricities.
In this video from ISC’14, the DEEP and DEEP-ER Project teams describe their prototype hardware and software. “The DEEP consortium will develop a novel, Exascale-enabling supercomputing architecture with a matching SW stack and a set of optimized grand-challenge simulation applications. DEEP takes the concept of compute acceleration to a new level: instead of adding accelerator cards to Cluster nodes, an accelerator Cluster, called Booster, will complement a conventional HPC system and increase its compute performance.”
Thomas Lippert from the Jülich Supercomputing Centre writes that the DEEP project for exascale research is pushing the limits when it comes to programming models. “In the last couple of weeks DEEP has gone through a very exciting phase – basically the ultimate baptism of fire for our concept: The new hardware has first come to life.”
“In this session we will discuss technologies recently announced by NVIDIA and how they help address key HPC challenges, such as energy efficiency, to get closer to achieving Exascale. We will also discuss the use of HPC in Brazil, and how Brazil compares with and can learn from the experience of other BRIC countries.”
“Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, Inc., as a provider of end-to-end communication services, is advancing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements which significantly improve InfiniBand’s scalability, performance, and ease of use.”
“This talk will focus on challenges in designing software libraries and middleware for upcoming exascale systems with millions of processors and accelerators. Two kinds of application domains – Scientific Computing and Big Data – will be considered. For the scientific computing domain, we will discuss challenges in designing runtime environments for MPI and PGAS (UPC and OpenSHMEM) programming models, taking into account support for multi-core, high-performance networks, GPGPUs, and Intel MIC.”