Bursting into the public Cloud: Experiences at large scale for IceCube


Igor Sfiligoi from SDSC

In this video from the ECSS Symposium series, Igor Sfiligoi from SDSC presents: Bursting into the public Cloud – Sharing my experience doing it at large scale for IceCube.

When compute needs spike well beyond the capacity of local resources, additional capacity should be provisioned temporarily from elsewhere, both to meet deadlines and to increase scientific output. Public clouds have become an attractive option because they can be provisioned with minimal advance notice. I recently helped IceCube expand its resource pool by a few orders of magnitude, first to 380 PFLOP32s (fp32 PFLOPS) for a few hours and later to 170 PFLOP32s for a whole workday. In the process we moved about 50 TB of data to and from the clouds, showing that networking is not a limiting factor either. While each run carried a non-negligible dollar cost, the effort involved was quite modest. In this session I will explain what was done and how, alongside an overview of why IceCube needs so much compute.
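As a rough illustration of the networking claim, the sketch below is a back-of-envelope estimate (not from the talk) of the average wide-area throughput implied by moving roughly 50 TB during the longer run; the 8-hour "workday" window is an assumption.

```python
# Illustrative back-of-envelope estimate: average sustained throughput needed
# to move ~50 TB of data within an assumed 8-hour run window.

DATA_TB = 50          # approximate data moved to/from the clouds (from the abstract)
WINDOW_HOURS = 8      # assumed length of a "whole workday" run

data_bits = DATA_TB * 1e12 * 8        # terabytes -> bits
window_seconds = WINDOW_HOURS * 3600  # hours -> seconds

avg_gbps = data_bits / window_seconds / 1e9
print(f"Average sustained throughput: {avg_gbps:.1f} Gbps")
# Roughly 14 Gbps sustained, which is well within the reach of modern research
# networks and consistent with networking not being a limiting factor.
```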

Igor Sfiligoi is Lead Scientific Software Developer and Researcher at the San Diego Supercomputer Center.
