Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud

In this video from the GPU Technology Conference, John Stone from the University of Illinois describes how container technology in the NVIDIA GPU Cloud helps the University distribute accelerated applications for science and engineering. “Containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance or a workstation or a compute node. And it doesn’t require the typical amount of system administration skills and involvement to put one of these containers on a machine.”
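
To make the quote concrete, here is a minimal sketch of that workflow, assuming Docker with the nvidia-docker v2 runtime on the host; the image name and tag below are placeholders, so check the NGC registry for the real ones.

    import subprocess

    # Placeholder NGC image name/tag -- consult the NGC registry for real ones.
    IMAGE = "nvcr.io/hpc/some-app:latest"

    # Pull the container: the application and all of its dependencies arrive
    # as one unit, with no system-wide installs or root privileges required.
    subprocess.run(["docker", "pull", IMAGE], check=True)

    # Run it with GPU access (nvidia-docker v2 syntax of that era).
    subprocess.run(["docker", "run", "--rm", "--runtime=nvidia", IMAGE], check=True)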

Video: HPC Use for Earthquake Research

Christine Goulet from the Southern California Earthquake Center gave this talk at the HPC User Forum in Tucson. “SCEC coordinates fundamental research on earthquake processes using Southern California as its principal natural laboratory. The SCEC community advances earthquake system science by synthesizing knowledge of earthquake phenomena through physics-based modeling, including system-level hazard modeling, and by communicating our understanding of seismic hazards to reduce earthquake risk and promote community resilience.”

Intel Open Sources nGraph, a Deep Neural Network Model Compiler for Multiple Devices

Over at Intel, Scott Cyphers writes that the company has open-sourced nGraph, a framework-neutral Deep Neural Network (DNN) model compiler that can target a variety of devices. With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices. “Continue reading below for highlights of our engineering challenges and design decisions, and see GitHub, our documentation, and our SysML paper for additional details.”
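
The core idea is a device-independent intermediate representation: a framework emits a graph once, and the compiler lowers it for each backend. The toy sketch below illustrates that separation only; it is not the actual nGraph API.

    from dataclasses import dataclass, field

    # Illustrative only -- NOT the real nGraph API. A framework-neutral
    # compiler consumes a device-independent graph like this toy IR, then
    # emits optimized kernels for each target backend.
    @dataclass
    class Node:
        op: str
        inputs: list = field(default_factory=list)

    # The same graph (MatMul followed by ReLU) could come from TensorFlow,
    # MXNet, or PyTorch; the compiler only ever sees the IR.
    x = Node("Parameter")
    w = Node("Parameter")
    out = Node("ReLU", [Node("MatMul", [x, w])])

    def lower(node, backend):
        # Toy lowering pass: walk the graph, pick a kernel per backend.
        for inp in node.inputs:
            lower(inp, backend)
        print(backend, "-> emit kernel for", node.op)

    lower(out, "CPU")   # the same model, retargeted...
    lower(out, "GPU")   # ...without touching the data-science code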

Universities step up to Cloud Bursting

In this special guest feature, Mahesh Pancholi from OCF writes that many universities are now engaging in cloud bursting and are regularly taking advantage of public cloud infrastructures that are widely available from large companies like Amazon, Google and Microsoft. “By bursting into the public cloud, the university can offer the latest and greatest technologies as part of its Research Computing Service for all its researchers.”

Charliecloud: Unprivileged Containers for User-Defined Software Stacks

“What if I told you there was a way to allow your customers and colleagues to run their HPC jobs inside the Docker containers they’re already creating? Or an easily learned, easily employed method for consistently reproducing a particular application environment across numerous Linux distributions and platforms? There is. In this talk/tutorial session, we’ll explore the problem domain and all the previous solutions, and then we’ll discuss and demo Charliecloud, a simple, streamlined container runtime that fills the gap between Docker and HPC — without requiring HPC Admins to lift a finger!”
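
For flavor, here is a sketch of the Charliecloud flow roughly as it looked at the time, driven from Python purely for illustration; the command names, image name, and paths are assumptions that may differ by version, so verify against the Charliecloud documentation.

    import subprocess

    def sh(*cmd):
        # Thin wrapper so each step below reads like the shell command it runs.
        subprocess.run(cmd, check=True)

    # Assumed Charliecloud workflow of that era; check current docs.
    sh("ch-build", "-t", "hello", ".")                     # wraps docker build
    sh("ch-docker2tar", "hello", "/var/tmp")               # flatten image to a tarball
    sh("ch-tar2dir", "/var/tmp/hello.tar.gz", "/var/tmp")  # unpack on the compute node
    sh("ch-run", "/var/tmp/hello", "--", "echo", "hello")  # run fully unprivileged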

Exascale Computing for Long Term Design of Urban Systems

In this episode of Let’s Talk Exascale, Charlie Catlett from Argonne National Laboratory and the University of Chicago describes how extreme scale HPC will be required to better build Smart Cities. “Urbanization is a bigger set of challenges in the developing world than in the developed world, but it’s still a challenge for us in US and European cities and Japan.”

Video: Addressing Key Science Challenges with Adversarial Neural Networks

Wahid Bhimji from NERSC gave this talk at the 2018 HPC User Forum in Tucson. “Machine Learning and Deep Learning are increasingly used to analyze scientific data, in fields as diverse as neuroscience, climate science and particle physics. On this page you will find links to examples of scientific use cases using deep learning at NERSC, information about what deep learning packages are available at NERSC, and details of how to scale up your deep learning code on Cori to take advantage of the compute power available from Cori’s KNL nodes.”
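
As a rough illustration of the scaling recipe such pages describe, here is a minimal data-parallel sketch using Horovod with Keras, a common approach on Cori-class systems at the time; the model, learning rate, and launch details are placeholders, not NERSC’s exact setup.

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one process per node or socket, typically launched via srun

    # Placeholder model; the real one would match the science use case.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])

    # Standard data-parallel recipe: wrap the optimizer and scale the
    # learning rate by the number of workers.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

    # Keep all ranks consistent at the start of training; each rank then
    # trains on its own shard of the data.
    callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
    # model.fit(x_shard, y_shard, callbacks=callbacks)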

David Kepczynski from GE to Chair ECP Industry Council

Today the Exascale Computing Project appointed David Kepczynski from GE Global Research as the new chair of the ECP Industry Council. “We are thrilled that Dave Kepczynski has agreed to take the leadership reins for the ECP’s Industry Council,” ECP Director Doug Kothe said. “He has been an active member of the Industry Council since day one, and his experience and vision pertaining to the potential impact of exascale on U.S. industries is invaluable to our mission.” Kothe added, “We wish to thank Michael McQuade for his pioneering leadership role with this external advisory group, and we wish him well with his future plans.”

Introducing the SPEC High Performance Group and HPC Benchmark Suites

Robert Henschel from Indiana University gave this talk at the Swiss HPC Conference. “In this talk, I will present an overview of the High Performance Group as well as SPEC’s benchmarking philosophy in general. Most everyone knows SPEC for the SPEC CPU benchmarks that are heavily used when comparing processor performance, but the High Performance Group specifically focuses on whole-system benchmarking utilizing the parallelization paradigms common in HPC, like MPI, OpenMP and OpenACC.”
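
As a toy illustration of the MPI paradigm those suites exercise (the benchmarks themselves are compiled C/C++/Fortran codes), here is a compute-plus-reduce pattern in mpi4py, assuming an MPI launcher such as mpirun is available:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank computes a partial sum; a reduction combines the pieces.
    # Whole-system benchmarks stress exactly this mix of computation
    # and communication.
    local = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)

    # Run with, e.g.: mpirun -n 4 python spec_demo.py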

Ceph on the Brain: Storage and Data-Movement Supporting the Human Brain Project

Adrian Tate from Cray and Stig Telfer from StackHPC gave this talk at the 2018 Swiss HPC Conference. “This talk will describe how Cray, StackHPC and the HBP co-designed a next-generation storage system based on Ceph, exploiting complex memory hierarchies and enabling next-generation mixed workload execution. We will describe the challenges, show performance data and detail the ways that a similar storage setup may be used in HPC systems of the future.”