“For the more traditional MPI applications there were significant slowdowns, over a factor of 10,” said Kathy Yelick, director of the National Energy Research Scientific Computing Center (NERSC), which is partnered with Argonne National Laboratory on the Magellan cloud project.
This isn’t new information; we had a version of it back in 2008 when Walker published his paper (which I wrote about here). But Walker was using Amazon’s EC2, and NERSC is using a purpose-built cloud, so this is a valuable refinement of our picture of the cloud world as it relates to HPC.
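What makes the traditional MPI codes so sensitive is their communication pattern: they synchronize across processors every timestep, so the higher message latency of a cloud network gets paid thousands of times over. The sketch below (using mpi4py, my choice of illustration, not anything from the Magellan runs) shows the shape of such a loop.

```python
# Sketch of why tightly coupled codes suffer on cloud networks:
# every timestep ends in a collective operation, so per-message
# latency is paid once per iteration, thousands of times per run.
# Illustrative only; not a benchmark from the Magellan study.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
local = np.random.rand(100_000)          # this rank's piece of the domain

for step in range(1000):
    local *= 0.999                       # stand-in for local computation
    total = comm.allreduce(local.sum())  # global sync every step: latency-bound
```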
But not all HPC jobs are so tightly coupled that inter-processor communication is the limiting factor. The DOD has entire computational areas gated by its ability to run complex parameter-space studies made up of tens or hundreds of thousands of what are essentially single-processor (or single-node) jobs. A time-shared HPC system set up to facilitate dozens of thousand-processor jobs often inhibits the kind of queue-stuffing the parameter-study crowd needs, and their demand is bursty, so exploring other alternatives is a good use of time.
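For contrast, the parameter-study pattern looks something like the sketch below: every point in the grid is an independent job with no communication between them, which is exactly what maps well onto cloud instances. The model function and parameter ranges here are hypothetical stand-ins, not anything from the Magellan or DOD workloads.

```python
# Minimal sketch of an embarrassingly parallel parameter sweep.
# Each grid point is an independent, single-process job; workers
# never talk to each other, so network latency doesn't matter.
import itertools
from multiprocessing import Pool

def run_one(params):
    """One 'job': evaluate the model at a single parameter point."""
    alpha, beta = params
    # Stand-in for a real simulation; no communication with other jobs.
    score = sum((alpha * i - beta) ** 2 for i in range(1000))
    return params, score

if __name__ == "__main__":
    grid = itertools.product(
        [0.1 * i for i in range(100)],   # alpha values
        [0.5 * j for j in range(100)],   # beta values
    )
    with Pool() as pool:                 # each worker runs independently
        for params, score in pool.imap_unordered(run_one, grid):
            pass                         # collect or write out results here
```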
Kathy’s team has identified another area of science that might benefit from a cloud environment: computations that can be performed serially, such as genomics calculations, showed little or no deterioration in performance in the commercial cloud, Yelick said. Magellan’s directors recently set up a collaboration with the Joint Genome Institute to carry out some of the institute’s computations on the Magellan cloud testbed.