Burst Buffers and Data-Intensive Scientific Computing

Glenn Lockwood

“For those who haven’t been following the details of one of DOE’s more recent procurement rounds, the NERSC-8 and Trinity request for proposals (RFP) explicitly required that all vendor proposals include a burst buffer to address the capability of multi-petaflop simulations to dump tremendous amounts of data in very short order. The target use case is for petascale checkpoint-restart, where the memory of thousands of nodes (hundreds of terabytes of data) needs to be flushed to disk in an amount of time that doesn’t dominate the overall execution time of the calculation.”
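The checkpoint pattern behind that use case is simple to sketch. Below is a minimal MPI-IO example in C in which every rank writes its local state to a shared checkpoint file at a disjoint offset; the file name, per-rank buffer size, and write strategy are illustrative assumptions, not details from the NERSC-8/Trinity RFP.

```c
/* Minimal checkpoint sketch: every rank writes its local state to one
 * shared file at a rank-computed offset. A burst buffer would absorb
 * this bursty write so compute can resume quickly. The path and sizes
 * are illustrative only. */
#include <mpi.h>
#include <stdlib.h>

#define CKPT_BYTES (64 * 1024 * 1024)  /* 64 MiB of state per rank (example) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *state = malloc(CKPT_BYTES);   /* stands in for application memory */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank lands at a disjoint offset; the collective write lets the
     * MPI-IO layer aggregate traffic before it hits the file system. */
    MPI_Offset offset = (MPI_Offset)rank * CKPT_BYTES;
    MPI_File_write_at_all(fh, offset, state, CKPT_BYTES, MPI_BYTE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(state);
    MPI_Finalize();
    return 0;
}
```

A burst buffer sits between this write and the parallel file system, soaking up the surge of data so ranks can return to computing while the checkpoint drains to disk in the background.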

Intel’s Eric Barton on the Need to Move Beyond POSIX for Exascale I/O

Eric Barton

In this video from ISC’14, Eric Barton from Intel describes the goals of the two-year FastForward Storage and I/O project, which the company recently wrapped up.

International Exascale Workshop Culminates with US-Japan Collaboration Agreement

On June 22, the US Department of Energy (DOE) and Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT) signed an agreement to collaborate on exascale supercomputing technologies for the scientific community. In a nutshell, the plan is to build a common OS kernel that can be used by all post-petascale systems, regardless of hardware eccentricities.

Video: DEEP and DEEP-ER Project Updates at ISC’14

In this video from ISC’14, the DEEP and DEEP-ER Project teams describe their prototype hardware and software. “The DEEP consortium will develop a novel, Exascale-enabling supercomputing architecture with a matching SW stack and a set of optimized grand-challenge simulation applications. DEEP takes the concept of compute acceleration to a new level: instead of adding accelerator cards to Cluster nodes, an accelerator Cluster, called Booster, will complement a conventional HPC system and increase its compute performance.”
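One way to picture the cluster-booster split is with standard MPI dynamic process management: the Cluster side spawns work onto the Booster partition and collects results over an intercommunicator. The sketch below is only a hedged illustration of that idea, not the DEEP project's actual offload machinery; the executable name "booster_kernel" and the process count are hypothetical.

```c
/* Sketch of cluster-booster offload using standard MPI dynamic process
 * management, in the spirit of the DEEP architecture but not its actual
 * software stack. "booster_kernel" and the task count are hypothetical. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Cluster side: spawn a group of processes on the Booster partition
     * and hand the highly scalable part of the computation to them. */
    MPI_Comm booster;
    MPI_Comm_spawn("booster_kernel", MPI_ARGV_NULL, 16, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &booster, MPI_ERRCODES_IGNORE);

    /* Receive the reduced result back over the intercommunicator. */
    double result;
    MPI_Recv(&result, 1, MPI_DOUBLE, 0, 0, booster, MPI_STATUS_IGNORE);

    MPI_Comm_disconnect(&booster);
    MPI_Finalize();
    return 0;
}
```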

Video: Panel Discussion on Exascale Computing

In this video from the 2014 HPC Advisory Council Europe Conference, Gilad Shainer from HPCAC moderates a panel discussion on exascale computing.

DEEP Project Testing Smart Acceleration for Clusters

Thomas Lippert from the Jülich Supercomputing Centre writes that the DEEP project for exascale research is pushing the limits when it comes to programming models. “In the last couple of weeks DEEP has gone through a very exciting phase – basically the ultimate baptism of fire for our concept: The new hardware has first come to life.”

Complete Archives of The Exascale Report Now Available

Welcome to the new home of The Exascale Report! Acquired by insideHPC Media in February, the complete archives of The Exascale Report are now available free of charge to anyone who registers for premium content.

Video: The Future of HPC and The Path to Exascale

“In this session we will discuss technologies recently announced by NVIDIA and how they help address key HPC challenges such as energy efficiency to get closer to achieving Exascale. We will also discuss the use of HPC in Brazil and how Brazil compares and can learn from the experience of other BRIC countries.”

Video: The Exascale Architecture

“Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, Inc., as a provider of end-to-end communication services, is advancing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements that significantly improve InfiniBand’s scalability, performance, and ease of use.”

Video: Designing Software Libraries and Middleware for Exascale Systems

DK Panda

“This talk will focus on challenges in designing software libraries and middleware for upcoming exascale systems with millions of processors and accelerators. Two application domains will be considered: Scientific Computing and Big Data. For the scientific computing domain, we will discuss challenges in designing runtime environments for MPI and PGAS (UPC and OpenSHMEM) programming models, taking into account support for multi-core processors, high-performance networks, GPGPUs, and Intel MIC.”
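To make the PGAS side of that concrete, here is a minimal OpenSHMEM example in C: each processing element (PE) performs a one-sided put into a symmetric variable on its neighbor, with no matching receive on the target. This one-sided communication style is what such runtimes must scale to millions of processors; the program itself is a generic sketch, not code from the talk.

```c
/* Minimal OpenSHMEM sketch of the PGAS model: a one-sided put into a
 * symmetric variable on a neighboring PE, with no matching receive. */
#include <shmem.h>
#include <stdio.h>

int dest = 0;  /* symmetric: exists at the same address on every PE */

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    int src = me;
    /* One-sided write of our PE number into the next PE's copy of dest. */
    shmem_int_put(&dest, &src, 1, (me + 1) % npes);

    shmem_barrier_all();  /* ensure all puts are complete and visible */
    printf("PE %d received %d\n", me, dest);

    shmem_finalize();
    return 0;
}
```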