Argonne Looks to Singularity for HPC Code Portability

Over at Argonne, Nils Heinonen writes that researchers are using the open source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere. “Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced.”
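
For readers who want a concrete picture of that build-snapshot-rerun cycle, here is a minimal sketch driven from Python via the standard singularity command-line verbs (build and exec). The definition file, image name, analysis script, and parameters are hypothetical placeholders, not Argonne's actual recipe.

```python
# Minimal sketch (not Argonne's actual workflow): build an immutable Singularity
# image from a definition file, then re-run the analysis from that snapshot.
# File names, the analysis script, and the parameters are placeholders.
import subprocess

def build_snapshot(definition="workflow.def", image="workflow_v1.sif"):
    """Build an image that captures the software stack for provenance."""
    # (building typically requires root privileges or a rootless option)
    subprocess.run(["singularity", "build", image, definition], check=True)
    return image

def reproduce(image="workflow_v1.sif", params=("--seed", "42")):
    """Re-run the containerized analysis with the recorded parameters."""
    subprocess.run(["singularity", "exec", image, "python", "analysis.py", *params],
                   check=True)

if __name__ == "__main__":
    reproduce(build_snapshot())
```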

Fast Simulation with Generative Adversarial Networks

In this video from the Intel User Forum at SC18, Dr. Sofia Vallecorsa from CERN openlab presents: Fast Simulation with Generative Adversarial Networks. “This talk presents an approach based on generative adversarial networks (GANs) and a method to train them over multiple nodes using the TensorFlow deep learning framework with Uber Engineering’s Horovod communication library. Preliminary results on the scaling of training time demonstrate how HPC centers could be used to globally optimize AI-based models to meet a growing community need.”
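
As a rough illustration of the approach described in the talk, the sketch below applies the standard Horovod + tf.keras pattern (initialize Horovod, pin one GPU per process, scale the learning rate, wrap the optimizer, broadcast initial weights) to a toy network trained on random data. The real calorimeter GAN architecture and dataset are not reproduced here.

```python
# Minimal multi-node training sketch with Horovod and tf.keras. The toy model
# and random data stand in for the calorimeter GAN described in the talk.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per GPU, launched e.g. with horovodrun or mpirun

# Pin each process to a single local GPU, if any are present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Placeholder data: flattened "calorimeter images" with real/fake labels.
x = np.random.rand(1024, 25 * 25).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(25 * 25,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Scale the learning rate by the number of workers and wrap the optimizer so
# gradients are averaged across nodes with allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(loss="binary_crossentropy", optimizer=opt)

model.fit(x, y, batch_size=64, epochs=1,
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
          verbose=1 if hvd.rank() == 0 else 0)
```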

Micron Joins CERN openlab

Last week at SC18, Micron announced that the company has joined CERN openlab, a unique public-private partnership, by signing a three-year agreement. Under the agreement, Micron will provide CERN with advanced next-generation memory solutions to further machine learning capabilities for high-energy physics experiments at the laboratory. Micron’s memory solutions, which integrate neural network capabilities, will be tested in the data-acquisition systems of experiments at CERN.

Video: How AI is Helping Scientists with the Large Hadron Collider

In this video from SC18 in Dallas, Dr. Sofia Vallecorsa from CERN openlab describes how AI is being used in the design of experiments for the Large Hadron Collider. “An award-winning effort at CERN has demonstrated potential to significantly change how the physics-based modeling and simulation communities view machine learning. The CERN team demonstrated that AI-based models have the potential to act as orders-of-magnitude-faster replacements for computationally expensive tasks in simulation, while maintaining a remarkable level of accuracy.”

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, the equivalent of nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
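
As a quick sanity check on that comparison, assuming roughly 3.3 GB per high-definition movie (an assumed figure, not one from the article):

```python
# Back-of-the-envelope check: 50 PB divided by an assumed ~3.3 GB per HD movie.
petabyte = 1e15                 # bytes, decimal convention
lhc_output = 50 * petabyte
movie_size = 3.3e9              # ~3.3 GB per movie (assumption)
print(f"{lhc_output / movie_size / 1e6:.1f} million movies")  # ~15.2 million
```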

Generative Models for Application-Specific Fast Simulation of LHC Collision Events

Maurizio Pierini from CERN gave this talk at PASC18. “We investigate the possibility of using generative models (e.g., GANs and variational autoencoders) as analysis-specific data augmentation tools to increase the size of the simulation data used by the LHC experiments. With the LHC entering its high-luminosity phase in 2025, the projected computing resources will not be able to sustain the demand for simulated events. Generative models are already being investigated as a means to speed up the centralized simulation process.”
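
The augmentation idea can be sketched as follows: once a generator has been trained on (expensive) fully simulated events, additional events are sampled from random latent vectors instead of being produced by the full simulation chain. The stand-in Keras generator, the 25x25 event shape, and the checkpoint name in the comment are illustrative assumptions, not the models studied in the talk.

```python
# Minimal sketch of generative-model data augmentation with a stand-in generator.
import numpy as np
import tensorflow as tf

latent_dim = 64

# Stand-in for a trained GAN generator or VAE decoder; in practice this would be
# loaded from a checkpoint, e.g. tf.keras.models.load_model("generator.h5").
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(25 * 25, activation="relu"),
    tf.keras.layers.Reshape((25, 25)),
])

def augment(n_events: int) -> np.ndarray:
    """Sample n_events synthetic calorimeter-like images from latent noise."""
    z = np.random.normal(size=(n_events, latent_dim)).astype("float32")
    return generator.predict(z, verbose=0)

fast_events = augment(10_000)   # far cheaper than running the full simulation
print(fast_events.shape)        # (10000, 25, 25)
```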

Training Generative Adversarial Models over Distributed Computing Systems

Gul Rukh Khattak from CERN gave this talk at PASC18. “We use a dataset composed of the energy deposition from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter. The training of these models is quite compute-intensive, even with the help of GPGPUs, and we propose a method to train them over multiple nodes and GPGPUs using a standard message passing interface. We report on the scaling of time-to-solution.”
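
The general data-parallel pattern can be sketched with mpi4py: each rank computes gradients on its own shard of events, and an allreduce averages them before every update so that all ranks apply identical weight updates. The toy linear model and random data below stand in for the calorimeter GAN, which is not reproduced here.

```python
# Minimal sketch of data-parallel training over MPI
# (run with e.g.: mpirun -n 4 python train.py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)
w = np.zeros(10, dtype=np.float64)
comm.Bcast(w, root=0)                          # all ranks start from the same weights

for step in range(100):
    # Local shard of "events" and targets (random placeholders).
    x = rng.normal(size=(256, 10))
    y = x @ np.arange(10, dtype=np.float64) + rng.normal(scale=0.1, size=256)

    grad = 2.0 * x.T @ (x @ w - y) / len(y)    # local gradient of the squared error

    avg = np.empty_like(grad)
    comm.Allreduce(grad, avg, op=MPI.SUM)      # sum gradients across ranks...
    avg /= size                                # ...then average

    w -= 0.01 * avg                            # identical update on every rank

if rank == 0:
    print("final weights:", np.round(w, 2))
```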

Addressing Computing Challenges at CERN openlab

In this special guest feature from Scientific Computing World, Robert Roe speaks with Dr Maria Girone, Chief Technology Officer at CERN openlab, ahead of her keynote presentation at ISC High Performance. “The challenge of building the largest particle accelerator has been met, but there is another challenge – harnessing all of the data produced through experimentation. This will become even greater when the ‘high-luminosity’ LHC experiments begin in 2026.”

Agenda Posted for Mass Storage Conference in Santa Clara

The 34th International Conference on Massive Storage Systems and Technologies (MSST 2018) has posted its Speaker Agenda. The event takes place May 14-16 in Santa Clara, California. “Join the discussion on webscale IT and the demand on storage systems from IoT, healthcare, scientific research, and the continuing stream of smart applications (apps) for mobile devices.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific instruments ever constructed. In this keynote, I will discuss the challenges of capturing, storing, and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has produced so far.”