“Every day, the computing power of high-performance computing (HPC) clusters helps scientists make breakthroughs, such as proving the existence of gravitational waves and screening new compounds for drugs. Yet building HPC clusters is out of reach for most organizations, due to the upfront hardware costs and ongoing operational expenses. Now the speed of innovation is bound only by your imagination, not your budget. Researchers can run one cluster for 10,000 hours or 10,000 clusters for one hour, anytime, from anywhere, and both cost the same in the cloud. And with the availability of Public Data Sets in Amazon S3, petabyte-scale data is instantly accessible in the cloud. Attend and learn how to build HPC clusters on the fly, leverage Amazon’s Spot market pricing to minimize the cost of HPC jobs, and scale HPC jobs on a small budget, using all the same tools you use today, and a few new ones too.”
Dr. Gabriele Garzoglio is a Technical Program Manager with the Scientific Computing Division at Fermilab. He has headed departments such as Scientific Data Processing Solutions and Grid and Cloud Services, overseeing the development and operation of distributed software infrastructure for the job and data handling needs of Fermilab stakeholders. Among Dr. Garzoglio’s responsibilities, one top priority is the management of the Fermilab HEPCloud Facility Project. The project is developing a virtual facility that provides a common interface to a variety of physical computing resources, including local clusters, grids, high-performance computers, and community and commercial clouds. Dr. Garzoglio received his PhD in Computer Science from DePaul University in 2006 and his Laurea degree (Master of Science) in Physics from the University of Genova, Italy, in 1996.