Converging Workflows Pushing Converged Software onto HPC Platforms

Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving their convergence. Traditional big science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC.

Piz Daint Supercomputer to Power LHC Computing Grid

The fastest supercomputer in Europe will soon join the Worldwide LHC Computing Grid (WLCG). Housed at CSCS in Switzerland, the Piz Daint supercomputer will be used for data analysis from Large Hadron Collider (LHC) experiments. Until now, the ATLAS, CMS and LHCb particle detectors delivered their data to the “Phoenix” system for analysis and comparison with the results of previous simulations.

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, the equivalent of nearly 15 million high-definition movies, an amount so enormous that analyzing it all poses a serious challenge to researchers.

Generative Models for Application-Specific Fast Simulation of LHC Collision Events

Maurizio Pierini from CERN gave this talk at PASC18. “We investigate the possibility of using generative models (e.g., GANs and variational autoencoders) as analysis-specific data augmentation tools to increase the size of the simulation data used by the LHC experiments. With the LHC entering its high-luminosity phase in 2025, the projected computing resources will not be able to sustain the demand for simulated events. Generative models are already being investigated as a means to speed up the centralized simulation process.”
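
To illustrate the idea described in the abstract, here is a minimal, self-contained sketch (not CERN’s actual code) of how a variational autoencoder could be trained on vectors of simulated event features and then sampled to produce additional synthetic events for augmentation. The feature dimension, network sizes, and the random training data are hypothetical placeholders; PyTorch is assumed purely for convenience.

# Illustrative sketch only: a tiny variational autoencoder (VAE) trained on
# vectors of simulated event features, then sampled to generate additional
# synthetic events. Feature dimension, layer sizes, and the random training
# data are hypothetical placeholders, not an actual LHC pipeline.
import torch
import torch.nn as nn

class EventVAE(nn.Module):
    def __init__(self, n_features=20, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample the latent vector differentiably
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior
    recon_err = ((recon - x) ** 2).sum(dim=1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return recon_err + kl

if __name__ == "__main__":
    torch.manual_seed(0)
    events = torch.randn(1024, 20)          # stand-in for simulated event features
    model = EventVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(50):
        recon, mu, logvar = model(events)
        loss = vae_loss(events, recon, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Decode fresh latent samples into synthetic "augmented" events
    with torch.no_grad():
        synthetic = model.decoder(torch.randn(4096, 4))
    print(synthetic.shape)  # torch.Size([4096, 20])

In a real analysis, the generated samples would of course be validated against the full detector simulation before being mixed into any training or augmentation set.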

ISC Keynote: Tackling Tomorrow’s Computing Challenges Today at CERN

In this keynote video from ISC 2018, the physicist and CTO of CERN openlab discusses the demands of capturing, storing, and processing the large volumes of data generated by the LHC experiments. “CERN openlab is a unique public-private partnership between the European Organization for Nuclear Research (CERN) and some of the world’s leading ICT companies. It plays a leading role in helping CERN address the computing and storage challenges related to the Large Hadron Collider’s (LHC) upgrade program.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific instruments ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has produced to date.”

ISC 2017 Distinguished Talks to Focus on Data Analytics in Manufacturing & Science

Today ISC 2017 announced that its Distinguished Talk series will focus on data analytics in manufacturing and scientific applications. One of the Distinguished Talks will be given by Dr. Sabina Jeschke from the Cybernetics Lab at RWTH Aachen University on the topic of “Robots in Crowds – Robots and Clouds.” Jeschke’s presentation will be followed by one from physicist Kerstin Tackmann of the German Electron Synchrotron (DESY) research center, who will discuss big data and machine learning techniques used for the ATLAS experiment at the Large Hadron Collider.

Cray CS400 Supercomputer Coming to Baylor University

Today Cray announced that Baylor University has selected a Cray CS400 cluster supercomputer, further demonstrating the university’s commitment to transformative research. The Cray system will serve as the primary high performance computing platform for Baylor researchers and will be supported by the Academic and Research Computing Services (ARCS) group of the Baylor University Libraries. The Cray CS400 cluster supercomputer will replace Baylor’s current HPC system, enhancing and expanding its capacity for computational research projects.

Supercomputing LHC Experiments with Titan

University of Texas at Arlington physicists are preparing the Titan supercomputer at the Oak Ridge Leadership Computing Facility in Tennessee to support the analysis of data generated from the quadrillions of proton collisions expected during this season’s Large Hadron Collider particle physics experiments.

Why the HPC Industry will Converge on Europe at ISC 2016

In this special guest feature from Scientific Computing World, ISC’s Nages Sieslack highlights a convergence of technologies around HPC, a focus of the ISC High Performance conference, which takes place June 19-23 in Frankfurt. “In addition to the theme of convergent HPC technologies, this year’s conference will also offer two days of sessions in the industry track, specially designed to meet the interests of commercial users. Our focus is Industrie 4.0, a German strategic initiative conceived to take a leading role in pioneering industrial IT, which is currently revolutionizing engineering in the manufacturing sector.”