Video: Neutrino Telescope Data Management and Analysis

In this video from PASC18, Tessa Carver from the University of Geneva presents: Neutrino Telescope Data Management and Analysis. “We will describe the data flow structure from onsite DAQ to filtered streams for the various physics scopes of IceCube and ANTARES, and the plans for KM3NeT. The data formats and data management software will also be described, as well as plans for making data public.”

David Bader on Real World Challenges for Big Data Analytics

In this video from PASC18, David Bader from Georgia Tech summarizes his keynote talk on Big Data Analytics. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms, and development of frameworks for solving these real-world problems on high performance computers.”

Characterizing Faults, Errors and Failures in Extreme-Scale Computing Systems

Christian Engelmann from ORNL gave this talk at PASC18. “Building a reliable supercomputer that achieves the expected performance within a given cost budget and providing efficiency and correctness during operation in the presence of faults, errors, and failures requires a full understanding of the resilience problem. The Catalog project develops a fault taxonomy, catalog and models that capture the observed and inferred conditions in current supercomputers and extrapolates this knowledge to future-generation systems. To date, the Catalog project has analyzed billions of node hours of system logs from supercomputers at Oak Ridge National Laboratory and Argonne National Laboratory. This talk provides an overview of our findings and lessons learned.”

The Search for Gravitational Waves

In this video from PASC18, Alexander Nitz from the Max Planck Institute for Gravitational Physics in Germany presents: The Search for Gravitational Waves. “The LIGO and Virgo detectors have completed a prolific observation run. We are now observing gravitational waves from both the mergers of binary black holes and neutron stars. We’ll discuss how these discoveries were made and look into what the near future of searching for gravitational waves from compact binary mergers will look like.”

Video: Large Scale Training for Model Optimization

Jakub Tomczak from the University of Amsterdam gave this talk at PASC18. “Deep generative models allow us to learn hidden representations of data and generate new examples. There are two major families of models that are exploited in current applications: Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs). We will point out advantages and disadvantages of GANs and VAEs. Some of the most promising applications of deep generative models will be shown.”
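The contrast between the two families comes down to the objective each one optimizes. The toy sketch below (our own illustration, not code from the talk; the function names are invented) writes both objectives as plain numpy functions: the VAE maximizes an evidence lower bound (reconstruction plus a KL penalty on the latent code), while the GAN discriminator minimizes a binary cross-entropy that separates real from generated samples.

```python
import numpy as np

def vae_elbo(x, x_recon, mu, logvar):
    """VAE evidence lower bound (up to constants): a Gaussian
    reconstruction term plus the KL divergence between the approximate
    posterior N(mu, exp(logvar)) and the standard-normal prior."""
    recon = -np.sum((x - x_recon) ** 2)  # squared-error reconstruction
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon - kl

def gan_discriminator_loss(d_real, d_fake):
    """Binary cross-entropy the GAN discriminator minimizes:
    push D(real) toward 1 and D(fake) toward 0."""
    eps = 1e-12  # avoid log(0)
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
```

A perfect reconstruction with a posterior equal to the prior gives an ELBO of zero, and a discriminator that separates real from fake well incurs a smaller loss than one that outputs 0.5 everywhere.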

Interview: Constantia Alexandrou on the Challenges of Quantum Chromodynamics

In this video from PASC18, Constantia Alexandrou from the University of Cyprus discusses her domain of expertise – quantum chromodynamics. “Many – if not most – fields in physics employ high performance computing, yet quantum chromodynamics (QCD) might be the premiere example of an area very difficult to understand outside of the field.”

Radio Free HPC Reviews Lincoln Labs Paper on Spectre/Meltdown Performance Hits

In this podcast, the Radio Free HPC team looks at a new whitepaper from Lincoln Labs focused on the performance hit of Spectre/Meltdown mitigations. The news is not good. After that, Shahin points us to the story about how DARPA just allocated $75 Million in awards for thinking-outside-the-box computing innovation. They call it the Electronics Resurgence Initiative, and the list of projects funded includes something called Software Defined Hardware.

Low-Mach Simulation of Flow and Heat Transfer in an Internal Combustion Engine

In this video from PASC18, Saumil Patel from Argonne describes his poster on engine combustion simulation. “This work marks a milestone achievement in using Nek5000, a highly-scalable computational fluid dynamics (CFD) solver, to capture turbulent flow and thermal fields inside realistic engine geometries. In the context of an arbitrary Lagrangian-Eulerian (ALE) framework, several algorithms have been developed and integrated into Nek5000 in order to overcome the computational challenges associated with moving boundaries (i.e. valves and pistons).”

Easy and Efficient Multilevel Checkpointing for Extreme Scale Systems

Leonardo Bautista from the Barcelona Supercomputing Center gave this talk at PASC18. “Extreme scale supercomputers offer thousands of computing nodes to their users to satisfy their computing needs. As the need for massively parallel computing increases in industry, computing centers are being forced to increase in size and to transition to new computing technologies. In this talk, we will discuss how to guarantee high reliability to high performance applications running in extreme scale supercomputers. In particular, we cover the tools necessary to implement scalable multilevel checkpointing for tightly coupled applications.”

Abstractions and Directives for Adapting Wavefront Algorithms to Future Architectures

Robert Searles from the University of Delaware gave this talk at PASC18. “Architectures are rapidly evolving, and exascale machines are expected to offer billion-way concurrency. We need to rethink algorithms, languages and programming models among other components in order to migrate large scale applications and explore parallelism on these machines. Although directive-based programming models allow programmers to worry less about programming and more about science, expressing complex parallel patterns in these models can be a daunting task especially when the goal is to match the performance that the hardware platforms can offer.”
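The wavefront pattern the abstract refers to can be illustrated with a toy sweep (our own sketch, not code from the talk): cell (i, j) depends on its upper and left neighbors, so the cells along each anti-diagonal are mutually independent, and that inner loop over a diagonal is exactly where a parallel directive would go.

```python
import numpy as np

def wavefront_sweep(n):
    """Toy wavefront update on an n-by-n grid: each interior cell is
    the average of its upper and left neighbors, so computation must
    proceed diagonal by diagonal."""
    grid = np.zeros((n, n))
    grid[0, :] = 1.0  # fixed boundary values
    grid[:, 0] = 1.0
    # Sweep anti-diagonals d = i + j. Cells within one diagonal have
    # no dependencies on each other -- in a directive-based model,
    # the inner loop below is the one a parallel directive annotates.
    for d in range(2, 2 * n - 1):
        for i in range(max(1, d - n + 1), min(d, n)):
            j = d - i
            if 1 <= j < n:
                grid[i, j] = 0.5 * (grid[i - 1, j] + grid[i, j - 1])
    return grid
```

With all boundary values set to 1, every interior cell averages to 1, which makes the sweep easy to verify; the point of the sketch is the dependency structure, not the stencil itself.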