Agenda Posted: OpenPOWER 2018 Summit in Las Vegas

The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. “The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparatuses ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced.”

HACC: Fitting the Universe inside a Supercomputer

Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. “In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one.”
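To give a flavor of the numerical core such codes are built around, here is a minimal Python sketch of the kick-drift-kick leapfrog update that gravitational N-body solvers rest on. It uses a direct O(N²) force sum on a few hundred particles; HACC’s actual hybrid particle-mesh and accelerator-tuned methods are far more sophisticated, and every constant and name below is illustrative only.

```python
# Toy direct-summation gravitational N-body step (illustrative only; HACC
# uses hybrid particle-mesh/tree methods at vastly larger scales).
import numpy as np

G = 1.0           # gravitational constant in code units (assumption)
SOFTENING = 1e-3  # Plummer softening to avoid singular pairwise forces

def accelerations(pos, mass):
    """O(N^2) direct-sum gravitational acceleration on each particle."""
    diff = pos[None, :, :] - pos[:, None, :]        # pairwise separations
    dist2 = (diff ** 2).sum(-1) + SOFTENING ** 2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # no self-interaction
    return G * (diff * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick step; symplectic, so energy drift stays bounded."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
    pos = pos + dt * vel                              # full drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
    return pos, vel

# Tiny example: 256 particles, nothing like the trillions HACC evolves.
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, (256, 3))
vel = np.zeros((256, 3))
mass = np.full(256, 1.0 / 256)
pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
```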

Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind

In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company’s machine learning technology and some of the challenges ahead. “DeepMind Inc. is well known for state-of-the-art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of the challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware, as opposed to simple supervised learning. Finally, I’d like to discuss opportunities for DRL to help systems design and operation.”
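As a rough illustration of what a DQN-style learner computes at each step (stripped of the replay buffers, neural networks and distributed machinery the talk is actually about), here is a hedged NumPy sketch of the one-step temporal-difference target and loss. The batch shapes, constants and names are assumptions for illustration, not DeepMind’s code.

```python
# One-step TD target and loss at the heart of DQN-style learning
# (illustrative sketch; real systems wrap this in replay and learner infra).
import numpy as np

GAMMA = 0.99  # discount factor (typical DQN setting, assumed here)

def dqn_targets(rewards, next_q_values, dones):
    """y = r + gamma * max_a' Q_target(s', a'), zeroed at episode ends."""
    return rewards + GAMMA * next_q_values.max(axis=1) * (1.0 - dones)

def td_loss(q_values, actions, targets):
    """Mean squared TD error on the Q-values of the actions actually taken."""
    chosen = q_values[np.arange(len(actions)), actions]
    return np.mean((chosen - targets) ** 2)

# Toy batch: 4 transitions, 3 discrete actions.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 3))        # online-network Q(s, .)
q_next = rng.normal(size=(4, 3))   # target-network Q(s', .)
actions = np.array([0, 2, 1, 0])
rewards = np.array([1.0, 0.0, 0.5, -1.0])
dones = np.array([0.0, 0.0, 1.0, 0.0])
print(td_loss(q, actions, dqn_targets(rewards, q_next, dones)))
```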

Interview: European cHiPSet Event focuses on High-Performance Modeling and Simulation for Big Data Applications

The cHiPSet Annual Plenary Meeting takes place in France next month. To learn more, we caught up with the Vice-Chair for the project, Dr. Horacio González-Vélez, Associate Professor and Head of the Cloud Competency Centre at the National College of Ireland. “The plenary meeting will feature a workshop entitled ‘Accelerating Modeling and Simulation in the Data Deluge Era’. We are expecting keynote presentations and panel discussions on how the forthcoming exascale systems will influence the analysis and interpretation of data, including the simulation of models, to match observation to theory.”

Call for Papers: International Workshop on In Situ Visualization

The 3rd International Workshop on In Situ Visualization has issued its Call for Papers. Held in conjunction with ISC 2018, WOIV 2018 takes place June 28 in Frankfurt, Germany. “Our goal is to appeal to a wide-ranging audience of visualization scientists, computational scientists, and simulation developers, who have to collaborate in order to develop, deploy, and maintain in situ visualization approaches on HPC infrastructures. We hope to provide practical take-away techniques and insights that serve as inspiration for attendees to implement or refine in their own HPC environments and to avoid pitfalls.”

PASC18 Keynote to Focus on Extreme-Scale Multi-Physics Earthquake Simulations

Today the PASC18 conference announced that Alice-Agnes Gabriel from Ludwig-Maximilians-Universität München will deliver a keynote address on earthquake simulation. “This talk will focus on using physics-based scenarios, modern numerical methods and hardware-specific optimizations to shed light on the dynamics and severity of earthquake behavior. It will present the largest-scale dynamic earthquake rupture simulation to date, which models the 2004 Sumatra-Andaman event – an unexpected subduction zone earthquake which generated a rupture of over 1,500 km in length within the ocean floor, followed by a series of devastating tsunamis.”

Updating the SC18 Technical Program to Inspire the Future

In this special guest feature, SC18 Technical Program Chair David Keyes from KAUST writes that important changes are coming to the world’s biggest HPC conference this November in Dallas.

Buying for Tomorrow: HPC Systems Procurement Matters

Ingrid Barcena from KU Leuven gave this talk at the HPC Knowledge Portal meeting in San Sebastián, Spain. “One of the biggest challenges when procuring High Performance Computing systems is to ensure not only that a faster machine than the previous one is bought, but that the new system is well suited to the organization’s needs, fits within a limited budget and proves value for money. However, this is not a simple task, and failing to buy the right HPC system can have tremendous consequences for an organization.”

Binary Packaging for HPC with Spack

Todd Gamblin from LLNL gave this talk at FOSDEM’18. “This talk will introduce binary packaging in Spack and some of the open infrastructure we have planned for distributing packages. We’ll talk about challenges to providing binaries for a combinatorially large package ecosystem, and what we’re doing in Spack to address these problems. We’ll also talk about challenges for implementing relocatable binaries with a multi-compiler system like Spack.”
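To illustrate one of the relocation problems the talk refers to, here is a hedged Python sketch of prefix patching in binaries: rewriting each NUL-terminated string that embeds the old install prefix while preserving its byte length, so offsets inside the file stay valid. This is a toy version of the general technique, not Spack’s actual implementation; all paths and names below are made up.

```python
# Toy illustration of binary prefix relocation (NOT Spack's implementation).
# Compiled artifacts embed their install prefix in RPATHs and config strings;
# a relocatable binary cache must rewrite those strings in place. One common
# trick is to swap in a shorter prefix and NUL-pad each patched C string so
# every byte offset in the file is preserved.

def relocate_prefix(data: bytes, old: bytes, new: bytes) -> bytes:
    """Rewrite every NUL-terminated string containing `old`, keeping each
    string's total byte length by padding with NULs at its end."""
    if len(new) > len(old):
        raise ValueError("new prefix must not be longer than the old one")
    out = bytearray(data)
    start = 0
    while True:
        i = out.find(old, start)
        if i == -1:
            break
        end = out.find(b"\0", i)                     # end of embedded C string
        if end == -1:
            end = len(out)
        patched = bytes(out[i:end]).replace(old, new)
        out[i:end] = patched.ljust(end - i, b"\0")   # same length as before
        start = end
    return bytes(out)

# Hypothetical example: a build-time staging prefix baked into a binary blob.
blob = b"\x7fELF...rpath=/tmp/stage/opt/zlib-1.2.11/lib\0...more bytes..."
print(relocate_prefix(blob, b"/tmp/stage", b"/opt/sw"))
```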