DOE's cloud experiment

Last week Michael at HPCwire wrote an article on Magellan, the DOE's experiment to see how cloud computing might fit into its environment. The $32 million program will split its investment between Argonne's LCF and NERSC.

One of the major questions the study hopes to answer is how well the DOE's mid-range scientific workloads match up with various cloud architectures and how those architectures could be optimized for HPC applications. Today most public clouds lack the network performance, as well as the CPU and memory capacities, needed to handle many HPC codes. The software environment in public clouds can also be at odds with HPC, since little effort has been made to optimize computational performance at the application level. Purpose-built HPC clouds may be the answer, and much of the Magellan effort will be focused on developing these private "science clouds."
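The network point is the easiest one to make concrete. Below is a minimal MPI ping-pong latency sketch, my own illustration rather than anything from Magellan or the HPCwire piece: two ranks bounce a tiny message back and forth and report the average one-way latency, which is the figure that tightly coupled HPC codes are most sensitive to and that commodity cloud networks tend to inflate relative to a purpose-built cluster interconnect.

```c
/* Hypothetical ping-pong latency microbenchmark (illustration only).
 * Rank 0 and rank 1 exchange a small message many times; the average
 * one-way latency is reported by rank 0. Run with: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 10000;
    char buf[8] = {0};               /* small message: latency-dominated */

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("avg one-way latency: %.2f us\n",
               elapsed / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```

On a dedicated HPC cluster this kind of test typically comes in at a few microseconds; over a virtualized Ethernet fabric it can be an order of magnitude or more higher, which is exactly the gap a purpose-built "science cloud" would aim to close.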

…The entire range of DOE scientific codes will be looked at, including energy research, climate modeling, bioinformatics, physics codes, applied math, and computer science research. But the focus will be on those codes that are typically run on HPC capacity clusters, which represent much of the computing infrastructure at DOE labs today.

More in the article, which is worth a read if you are at all interested in the potential of clouds in HPC.
