Let’s Talk Exascale: Optimizing I/O at the ADIOS Project

In this episode of Let’s Talk Exascale, researchers from the ADIOS project describe how they are optimizing I/O on exascale architectures and making the code easily maintainable, sustainable, and extensible, while ensuring its performance and scalability. “The Adaptable I/O System (ADIOS) project in the ECP supports exascale applications by addressing their data management and in situ analysis needs.”
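
For readers new to the library, the following is a minimal write-side sketch of the pattern ADIOS supports, using the ADIOS2 C++ API; the IO name "SimOutput", file name "output.bp", and variable name "data" are illustrative, not taken from the episode.

#include <adios2.h>
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const std::size_t nx = 100;                  // local elements per rank
    std::vector<double> data(nx, double(rank));  // stand-in payload

    adios2::ADIOS adios(MPI_COMM_WORLD);
    adios2::IO io = adios.DeclareIO("SimOutput");
    // Global shape, this rank's offset, and local count describe the array.
    auto var = io.DefineVariable<double>("data", {size * nx}, {rank * nx}, {nx});

    adios2::Engine writer = io.Open("output.bp", adios2::Mode::Write);
    writer.BeginStep();
    writer.Put(var, data.data());
    writer.EndStep();  // one output step; the engine decides how to move bytes
    writer.Close();

    MPI_Finalize();
    return 0;
}

Because the engine behind io.Open is chosen at run time, the same Put/EndStep calls can write files or stream steps to in situ consumers, which is the adaptability the project's name refers to.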

Intel to Showcase AI and HPC Demos at ISC 2018

Today Intel released a sneak peek at its plans for ISC 2018 in Frankfurt. The company will showcase how it’s helping AI developers, data scientists, and HPC programmers transform industries by tapping into HPC to power AI solutions. “ISC brings together academic and commercial disciplines to share knowledge in the field of high performance computing. Intel’s presence at the event will include keynotes, sessions, and booth demos that will be focused on the future of HPC technology, including Artificial Intelligence (AI) and visualization.”

Leadership Computing for Europe and the Path to Exascale Computing

Thomas Schulthess from CSCS gave this talk at the GPU Technology Conference. “With over 5000 GPU-accelerated nodes, Piz Daint has been Europe’s leading supercomputing system since 2013, and is currently one of the most performant and energy-efficient supercomputers on the planet. It has been designed to optimize throughput of multiple applications, covering all aspects of the workflow, including data analysis and visualisation. We will discuss ongoing efforts to further integrate these extreme-scale compute and data services with infrastructure services of the cloud. As a Tier-0 system of PRACE, Piz Daint is accessible to all scientists in Europe and worldwide. It provides a baseline for future development of exascale computing.”

Pre-exascale Architectures: OpenPOWER Performance and Usability Assessment for French Scientific Community

Gabriel Hautreux from GENCI gave this talk at the NVIDIA GPU Technology Conference. “The talk will present the OpenPOWER platform bought by GENCI and provided to the scientific community. Then, it will present the first results obtained on the platform for a set of about 15 applications using all the solutions provided to the users (CUDA, OpenACC, OpenMP, …). Finally, a presentation about one specific application will be made regarding its porting effort and the techniques used for GPUs with both OpenACC and OpenMP.”
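
As a flavor of what such a porting exercise looks like, the sketch below offloads the same generic loop (a saxpy stand-in, not one of the 15 applications from the talk) with each of the two directive models mentioned:

// Directive-based GPU offload of one generic loop, two ways.

void saxpy_acc(float a, const float *x, float *y, int n) {
    // OpenACC: the compiler builds the kernel; copyin/copy manage transfers.
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

void saxpy_omp(float a, const float *x, float *y, int n) {
    // OpenMP 4.5 target offload: same loop, map clauses control transfers.
    #pragma omp target teams distribute parallel for map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

Much of the porting effort in practice lies in choosing between such directive sets and restructuring data movement around them, which is what the per-application comparison in the talk addresses.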

Designing Scalable HPC, Deep Learning and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Swiss HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models. For the Deep Learning domain, we will focus on popular Deep Learning frameworks (Caffe, CNTK, and TensorFlow) to extract performance and scalability with the MVAPICH2-GDR MPI library.”
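
For readers who want a concrete picture of “MPI+X”, here is a minimal hybrid sketch with X = OpenMP (a generic reduction, not code from the talk or from MVAPICH2):

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    int provided;
    // Ask for thread support so OpenMP threads can coexist with MPI calls.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long local = 0;
    // X = OpenMP: threads share the work within each MPI rank.
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; ++i)
        local += 1;

    long global = 0;
    // MPI combines the per-rank partial results across the machine.
    MPI_Reduce(&local, &global, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("global = %ld\n", global);

    MPI_Finalize();
    return 0;
}

The runtime-design challenges the talk covers arise from making the MPI layer and the X layer (threads, PGAS, or CUDA) cooperate efficiently at millions of processors.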

How to Prepare Weather and Climate Models for Future HPC Hardware

Peter Dueben from ECMWF gave this talk at the NVIDIA GPU Technology Conference. “Learn how one of the leading institutes for global weather predictions, the European Centre for Medium-Range Weather Forecasts (ECMWF), is preparing for exascale supercomputing and the efficient use of future HPC computing hardware. I will name the main reasons why it is difficult to design efficient weather and climate models and provide an overview on the ongoing community effort to achieve the best possible model performance on existing and future HPC architectures.”

Podcast: Terri Quinn on Hardware and Integration at the Exascale Computing Project

In this podcast, Terri Quinn from LLNL provides an update on Hardware and Integration (HI) at the Exascale Computing Project. “The US Department of Energy (DOE) national laboratories will acquire, install, and operate the nation’s first exascale-class systems. ECP is responsible for assisting with applications and software and accelerating the research and development of critical commercial exascale system hardware. ECP’s Hardware and Integration research focus area (HI), was created to help the laboratories and the ECP teams achieve success through mutually beneficial collaborations.”

Video: Doug Kothe Looks Ahead at The Exascale Computing Project

In this video, Doug Kothe from ORNL provides an update on the Exascale Computing Project. “With respect to progress, marrying high-risk exploratory and high-return R&D with formal project management is a formidable challenge. In January, through what is called DOE’s Independent Project Review, or IPR, process, we learned that we can indeed meet that challenge in a way that allows us to drive hard with a sense of urgency and still deliver on the essential products and solutions. In short, we passed the review with flying colors—and what’s especially encouraging is that the feedback we received tells us what we can do to improve.”

HPE: Design Challenges at Scale

Jimmy Daley from HPE gave this talk at the HPC User Forum in Tucson. “High performance clusters are all about high speed interconnects. Today, these clusters are often built out of a mix of copper and active optical cables. While optical is the future, the cost of active optical cables is 4x–6x that of copper cables. In this talk, Jimmy Daley looks at the tradeoffs system architects need to make to meet performance requirements at reasonable cost.”

Let’s Talk Exascale: Making Software Development more Efficient

In this episode of Let’s Talk Exascale, Mike Heroux from Sandia National Labs describes the Exascale Computing Project’s Software Development Kit, an organizational approach to reducing the complexity of managing ECP software technology projects. “My hope is that as we create these SDKs and bring these independently developed products together under a collaborative umbrella, instead of saying that each of these individual products is available independently, we can start to say that an SDK is available.”