Training Generative Adversarial Models over Distributed Computing Systems

In this video from PASC18, Gul Rukh Khattak from CERN presents: Training Generative Adversarial Models over Distributed Computing Systems.

In the field of High Energy Physics, simulating the interaction of particles with detector material is a computationally intensive task, all the more so for complex, fine-grained detectors. A complete and maximally accurate simulation of particle-matter interactions is essential when calibrating and understanding the detector, but it is seldom required at the physics-analysis level, since detector effects can mask slight imperfections in the simulation. Some level of approximation is therefore acceptable, and less computationally intensive approaches can be implemented. We present a fast simulation based on conditional generative adversarial networks.
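The talk does not spell out the training objective, but the standard conditional-GAN setup pits a discriminator D (rewarded for telling real showers from generated ones, given the particle condition) against a generator G (rewarded when D scores its output as real). A minimal sketch of the two loss terms, with toy stand-in probabilities rather than real network outputs:

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of a probability p against a target in {0, 1}."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# Hypothetical discriminator outputs:
# d_real = D(shower | particle condition) on a real calorimeter shower,
# d_fake = D(G(z | condition) | condition) on a generated shower.
d_real, d_fake = 0.9, 0.2

# The discriminator is trained to score real showers high and fakes low.
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# The generator is trained to make the discriminator score its fakes high
# (the common "non-saturating" generator objective).
g_loss = bce(d_fake, 1.0)

print(round(d_loss, 4), round(g_loss, 4))  # → 0.3285 1.6094
```

In a real fast-simulation setup, the condition would encode the incoming particle's type and energy, and the generated sample would be the 3D energy deposition in the calorimeter cells.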

We use a dataset composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter. Training these models is computationally intensive even with the help of GPGPUs, and we propose a method to train them over multiple nodes and GPGPUs using a standard Message Passing Interface (MPI). We report on the scaling of time-to-solution. Further tuning of the models' hyper-parameters is thereby rendered tractable, and we present the physics performance of the best model obtained via Bayesian optimization using Gaussian processes. We demonstrate how a high-performance computing center can be utilized to globally optimize these kinds of models.
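The abstract does not detail the distribution scheme, but a common MPI pattern for this kind of data-parallel training is: each rank computes a gradient on its own shard of the data, the gradients are averaged with an allreduce, and every rank applies the same update. The sketch below simulates the allreduce in plain Python (the rank gradients are made-up numbers); with mpi4py the averaging step would be roughly `comm.allreduce(local_grad, op=MPI.SUM) / comm.Get_size()`:

```python
import numpy as np

def allreduce_mean(per_rank_grads):
    """Stand-in for an MPI sum-allreduce followed by division by the number
    of ranks: every rank ends up holding the same averaged gradient."""
    return np.mean(per_rank_grads, axis=0)

# Gradients computed by 4 hypothetical ranks, each on its own data shard.
rank_grads = [np.array([1.0, 2.0]),
              np.array([3.0, 2.0]),
              np.array([1.0, 0.0]),
              np.array([3.0, 4.0])]

avg = allreduce_mean(rank_grads)          # identical on every rank
weights = np.array([0.5, 0.5]) - 0.1 * avg  # synchronous SGD step
print(avg, weights)  # → [2. 2.] [0.3 0.3]
```

Because every rank applies the identical averaged gradient, the model replicas stay synchronized without any parameter broadcast after the initial one.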

Co-Author(s): Sofia Vallecorsa, Federico Carminati (CERN, Switzerland), Jean-Roch Vlimant (California Institute of Technology, USA)
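The Bayesian optimization mentioned in the abstract fits a Gaussian-process surrogate to the hyper-parameter/score pairs seen so far and picks the next trial by maximizing an acquisition function. This is a generic one-dimensional sketch using an RBF kernel and expected improvement on a made-up objective, not the authors' actual setup:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """GP posterior mean and variance at test points Xs, given data (X, y)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.ones(len(Xs)) - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    sd = np.sqrt(var)
    z = (mu - best) / sd
    cdf = 0.5 * (1 + np.array([erf(v / sqrt(2)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sd * pdf

def objective(x):
    # Hypothetical validation score as a function of one hyper-parameter;
    # in practice this would be a full (expensive) GAN training run.
    return -(x - 0.6) ** 2

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.1, 0.9])          # initial trials
y = objective(X)
for _ in range(10):               # 10 Bayesian-optimization iterations
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.max()))]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))

best_x = X[np.argmax(y)]
print(round(best_x, 2))
```

The expensive objective (a full training run on the cluster) is only evaluated at the points the acquisition function proposes, which is what makes hyper-parameter search tractable on an HPC system.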
