Video: Heterogeneous Computing at the Large Hadron Collider

In this video, Philip Harris from MIT presents: Heterogeneous Computing at the Large Hadron Collider. “Only a small fraction of the 40 million collisions per second at the Large Hadron Collider are stored and analyzed due to the huge volumes of data and the compute power required to process it. This project proposes a redesign of the algorithms using modern machine learning techniques that can be incorporated into heterogeneous computing systems, allowing more data to be processed and thus larger physics output and potentially foundational discoveries in the field.”
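To make the heterogeneous-computing idea concrete, here is a minimal sketch (not the project's actual trigger code) of offloading a small neural-network event classifier to whatever accelerator is available and keeping only the events it scores highly, mimicking a trigger-style selection. The 20 input features, the network size, and the 0.95 threshold are hypothetical placeholders.

```python
# Minimal sketch: run an ML event filter on GPU if present, else CPU.
# Feature count, architecture, and threshold are illustrative assumptions.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

classifier = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
).to(device)

# A batch of collision-event feature vectors (random stand-ins here).
events = torch.randn(4096, 20, device=device)

with torch.no_grad():
    keep_probability = classifier(events).squeeze(1)

# Keep only events scored above threshold, mimicking a trigger decision.
selected = events[keep_probability > 0.95]
print(f"Kept {selected.shape[0]} of {events.shape[0]} events on {device}")
```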

Deep Learning for Higgs Boson Identification and Searches for New Physics at the Large Hadron Collider

Mark Neubauer from the University of Illinois gave this talk at the Blue Waters Symposium. “In this talk, we present our work using deep learning techniques on the Blue Waters supercomputer to develop and optimize a novel method of identifying the decays of highly-boosted Higgs bosons produced at the LHC as a signature of new particles and/or phenomena at the energy frontier of particle physics. We also discuss our ongoing work using Blue Waters to develop scalable cyberinfrastructure for sustainable and reproducible data analysis workflows through the NSF-funded IRIS-HEP Institute and SCAILFIN project.”
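For readers unfamiliar with deep-learning jet taggers, the sketch below shows the general shape of such a classifier: a dense network trained to separate boosted Higgs jets from QCD background using jet substructure features. It is only an illustration under assumed inputs; the feature set, labels, and network size are hypothetical and not the speaker's actual model.

```python
# Minimal jet-tagger sketch, assuming hypothetical substructure features
# (e.g. jet mass, pT, N-subjettiness ratios, b-tag scores) and stand-in labels.
import torch
import torch.nn as nn

n_features = 8
x = torch.randn(10000, n_features)           # stand-in jet feature vectors
y = torch.randint(0, 2, (10000, 1)).float()  # 1 = Higgs jet, 0 = QCD jet

tagger = nn.Sequential(
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(tagger.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(tagger(x), y)  # binary cross-entropy on tagger logits
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```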

Converging Workflows: Pushing Converged Software onto HPC Platforms

Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving their convergence. Traditional big science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC.

Piz Daint Supercomputer to Power LHC Computing Grid

The fastest supercomputer in Europe will soon join the Worldwide LHC Computing Grid (WLCG). Housed at CSCS in Switzerland, the Piz Daint supercomputer will be used for data analysis from Large Hadron Collider (LHC) experiments. Until now, the ATLAS, CMS and LHCb particle detectors delivered their data to the “Phoenix” system for analysis and comparison with the results of previous simulations.

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, equivalent to nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
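As a back-of-the-envelope check of that comparison, assuming roughly 3.5 GB per high-definition movie (an assumption, not a figure from Argonne):

```python
# 50 PB expressed as HD movies, assuming ~3.5 GB per movie (decimal units).
petabytes = 50
gigabytes = petabytes * 1_000_000      # 1 PB = 10^6 GB
movies = gigabytes / 3.5
print(f"{movies / 1e6:.1f} million movies")  # ~14.3 million, i.e. "nearly 15 million"
```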

Generative Models for Application-Specific Fast Simulation of LHC Collision Events

Maurizio Pierini from CERN gave this talk at PASC18. “We investigate the possibility of using generative models (e.g., GANs and variational autoencoders) as analysis-specific data augmentation tools to increase the size of the simulation data used by the LHC experiments. With the LHC entering its high-luminosity phase in 2025, the projected computing resources will not be able to sustain the demand for simulated events. Generative models are already being investigated as a means to speed up the centralized simulation process.”
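The sketch below illustrates the idea in the talk with a minimal GAN: learn to generate event-level quantities directly, so new simulated events can be sampled in a single forward pass rather than through full detector simulation. The 10-dimensional “event”, the network sizes, and the training loop are hypothetical; this is not the CERN analysis code.

```python
# Minimal GAN sketch for fast event generation, under assumed event features.
import torch
import torch.nn as nn

latent_dim, event_dim = 16, 10

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, event_dim),
)
discriminator = nn.Sequential(
    nn.Linear(event_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_events = torch.randn(50000, event_dim)  # stand-in for fully simulated events

for step in range(1000):
    batch = real_events[torch.randint(0, len(real_events), (256,))]
    noise = torch.randn(256, latent_dim)
    fake = generator(noise)

    # Discriminator: score real events as 1, generated events as 0.
    d_loss = (loss_fn(discriminator(batch), torch.ones(256, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(256, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator score generated events as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(256, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Fast "simulation": sample new events in one forward pass.
with torch.no_grad():
    new_events = generator(torch.randn(100000, latent_dim))
```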

ISC Keynote: Tackling Tomorrow’s Computing Challenges Today at CERN

In this keynote video from ISC 2018, physicist Maria Girone, CTO of CERN openlab, discusses the demands of capturing, storing, and processing the large volumes of data generated by the LHC experiments. “CERN openlab is a unique public-private partnership between the European Organization for Nuclear Research (CERN) and some of the world’s leading ICT companies. It plays a leading role in helping CERN address the computing and storage challenges related to the Large Hadron Collider’s (LHC) upgrade program.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparatuses ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced.”

ISC 2017 Distinguished Talks to Focus on Data Analytics in Manufacturing & Science

Today ISC 2017 announced that its Distinguished Talk series will focus on data analytics in manufacturing and scientific applications. One of the Distinguished Talks will be given by Dr. Sabina Jeschke from the Cybernetics Lab at RWTH Aachen University on the topic of “Robots in Crowds – Robots and Clouds.” Jeschke’s presentation will be followed by one from physicist Kerstin Tackmann from the German Electron Synchrotron (DESY) research center, who will discuss big data and machine learning techniques used for the ATLAS experiment at the Large Hadron Collider.

Cray CS400 Supercomputer Coming to Baylor University

Today Cray announced that Baylor University has selected a Cray CS400 cluster supercomputer, further demonstrating the university’s commitment to transformative research. The Cray system will serve as the primary high performance computing platform for Baylor researchers and will be supported by the Academic and Research Computing Services (ARCS) group of the Baylor University Libraries. The Cray CS400 cluster supercomputer will replace Baylor’s current HPC system, enhancing and expanding its capacity for computational research projects.