

Pre-exascale Architectures: OpenPOWER Performance and Usability Assessment for French Scientific Community

Gabriel Hautreux from GENCI gave this talk at the NVIDIA GPU Technology Conference. “The talk will present the OpenPOWER platform bought by GENCI and provided to the scientific community. Then, it will present the first results obtained on the platform for a set of about 15 applications using all the solutions provided to the users (CUDA, OpenACC, OpenMP, …). Finally, a presentation about one specific application will be made regarding its porting effort and techniques used for GPUs with both OpenACC and OpenMP.”

Designing Scalable HPC, Deep Learning and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Swiss HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models. For the Deep Learning domain, we will focus on popular Deep Learning frameworks (Caffe, CNTK, and TensorFlow) to extract performance and scalability with the MVAPICH2-GDR MPI library.”

How to Prepare Weather and Climate Models for Future HPC Hardware

Peter Dueben from ECMWF gave this talk at the NVIDIA GPU Technology Conference. “Learn how one of the leading institutes for global weather predictions, the European Centre for Medium-Range Weather Forecasts (ECMWF), is preparing for exascale supercomputing and the efficient use of future HPC computing hardware. I will name the main reasons why it is difficult to design efficient weather and climate models and provide an overview on the ongoing community effort to achieve the best possible model performance on existing and future HPC architectures.”

Podcast: Terri Quinn on Hardware and Integration at the Exascale Computing Project

In this podcast, Terri Quinn from LLNL provides an update on Hardware and Integration (HI) at the Exascale Computing Project. “The US Department of Energy (DOE) national laboratories will acquire, install, and operate the nation’s first exascale-class systems. ECP is responsible for assisting with applications and software and accelerating the research and development of critical commercial exascale system hardware. ECP’s Hardware and Integration research focus area (HI), was created to help the laboratories and the ECP teams achieve success through mutually beneficial collaborations.”

Video: Doug Kothe Looks Ahead at The Exascale Computing Project

In this video, Doug Kothe from ORNL provides an update on the Exascale Computing Project. “With respect to progress, marrying high-risk exploratory and high-return R&D with formal project management is a formidable challenge. In January, through what is called DOE’s Independent Project Review, or IPR, process, we learned that we can indeed meet that challenge in a way that allows us to drive hard with a sense of urgency and still deliver on the essential products and solutions. In short, we passed the review with flying colors—and what’s especially encouraging is that the feedback we received tells us what we can do to improve.”

HPE: Design Challenges at Scale

Jimmy Daley from HPE gave this talk at the HPC User Forum in Tucson. “High performance clusters are all about high speed interconnects. Today, these clusters are often built out of a mix of copper and active optical cables. While optical is the future, the cost of active optical cables is 4x – 6x that of copper cables. In this talk, Jimmy Daley looks at the tradeoffs system architects need to make to meet performance requirements at reasonable cost.”

Let’s Talk Exascale: Making Software Development more Efficient

In this episode of Let’s Talk Exascale, Mike Heroux from Sandia National Labs describes the Exascale Computing Project’s Software Development Kit, an organizational approach to reduce the complexity of the project management of ECP software technology. “My hope is that as we create these SDKs and bring these independently developed products together under a collaborative umbrella, that instead of saying that each of these individual products is available independently, we can start to say that an SDK is available.”

Let’s Talk Exascale: Transforming Combustion Science and Technology

In this episode of Let’s Talk Exascale, Jackie Chen from Sandia National Laboratories describes the Combustion-Pele project, which uses predictive simulation for the development of cleaner-burning engines. “Almost all practical combustors operate under extremely high turbulence levels to increase the rate of combustion providing high efficiency, but there are still outstanding challenges in understanding how turbulence affects auto-ignition.”

Radio Free HPC Does the Math on Pending CORAL-2 Exascale Machines

In this podcast, the Radio Free HPC team takes a look at daunting performance targets for the DOE’s CORAL-2 RFP for Exascale Computers. “So, 1.5 million TeraFlops divided by 7.8 Teraflops per GPU is how many individual accelerators you need, and that’s 192,307. Now, multiply that by 300 watts per accelerator, and it is clear we are going to need something all-new to get where we want to go.”
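The back-of-the-envelope math quoted above can be sketched in a few lines. This is only an illustration of the podcast's arithmetic; the 1.5 exaflop target, 7.8 TFLOPS per GPU, and 300 W per accelerator figures are the ones quoted in the discussion, not official CORAL-2 specifications.

```python
# Rough accelerator-count and power estimate for a 1.5 exaflop system,
# using the per-GPU figures quoted in the Radio Free HPC discussion.

target_tflops = 1.5e6   # 1.5 exaflops, expressed in teraflops
tflops_per_gpu = 7.8    # assumed peak throughput per accelerator
watts_per_gpu = 300     # assumed power draw per accelerator

gpus_needed = target_tflops / tflops_per_gpu
accelerator_megawatts = gpus_needed * watts_per_gpu / 1e6

print(f"Accelerators needed: {gpus_needed:,.0f}")          # ~192,308
print(f"Accelerator power alone: {accelerator_megawatts:.1f} MW")  # ~57.7 MW
```

Roughly 58 MW for the accelerators alone, before counting CPUs, memory, interconnect, and cooling, which is why the hosts conclude that hitting exascale at a practical power budget will require something all-new.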

HLRS and Wuhan to Collaborate on Exascale Computing

The High-Performance Computing Center Stuttgart (HLRS) and Supercomputing Center of Wuhan University have announced plans to cooperate on technology and training projects. “HLRS and the Supercomputing Center at Wuhan University plan to exchange scientists and to focus on key research topics in high-performance computing. Both sides will also share experience in installing large-scale computing systems, particularly because both Wuhan and Stuttgart aim to develop exascale systems.”