In this Planet HPC article, Gillian Law interviews UK HPC users about their plans for the cloud.
“Our problem with the classic cloud is that we have a balance problem,” said Peter Maccallum, Cancer Research UK’s Head of IT and Scientific Computing in Cambridge. “For every CPU hour that we need, we also need several hundred gigabytes of storage close to the compute. We tend to move a lot of data in, get it processed, and get the results back. So we’re paying storage costs while it’s in the cloud, and we’re paying to move the data in and out. When you’re talking about terabyte volumes, those costs start to stack up.”
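To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. Every price and volume below is a hypothetical placeholder (none are quoted in the article); the point is simply that at terabyte scale, storage and transfer charges can rival the compute bill itself.

```python
# Back-of-the-envelope sketch: all prices and volumes are hypothetical
# placeholders, not figures from the article. It illustrates why
# data-heavy workloads pay three bills in the cloud: compute, storage
# while the data sits close to the compute, and transfer in/out.
DATA_TB = 5                    # data moved per analysis run (assumed)
EGRESS_PER_GB = 0.09           # $/GB to move results back out (assumed)
STORAGE_PER_GB_MONTH = 0.02    # $/GB-month while data is in the cloud (assumed)
COMPUTE_PER_CPU_HOUR = 0.10    # $/CPU-hour (assumed)
CPU_HOURS = 1_000

gb = DATA_TB * 1_000
cost_compute = CPU_HOURS * COMPUTE_PER_CPU_HOUR
cost_storage = gb * STORAGE_PER_GB_MONTH   # one month close to the compute
cost_egress = gb * EGRESS_PER_GB           # moving the results out again

print(f"compute: ${cost_compute:,.0f}")
print(f"storage: ${cost_storage:,.0f}")
print(f"egress:  ${cost_egress:,.0f}")
```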
For the banking industry, an entirely different perspective comes to light:
“In a Monte Carlo simulation, to get one decimal place more accuracy, you need ten times more simulation,” said Adam Vile, Head of Technical Consulting at Excelian. “We’re in the middle of a survey of the compute requirements in investment banks, and in some cases, the number of cores in place is upwards of 100,000. There are challenges in managing that level of resource.”
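For readers who want to see that scaling concretely, here is a minimal, illustrative Python sketch (not from the article): a toy Monte Carlo estimator of π. Monte Carlo standard error shrinks at the textbook 1/√N rate, so each hundredfold increase in samples buys roughly one more decimal place of accuracy.

```python
# Illustrative sketch, not code from the article: Monte Carlo error
# falls off as 1/sqrt(N), so roughly 100x more samples are needed
# for each additional decimal place of accuracy.
import math
import random

def estimate_pi(n_samples: int) -> float:
    """Estimate pi by sampling random points in the unit square."""
    inside = sum(
        1
        for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

# Each step multiplies the sample count by 100; the error should
# shrink by roughly a factor of 10 per step.
for n in (1_000, 100_000, 10_000_000):
    err = abs(estimate_pi(n) - math.pi)
    print(f"N={n:>10,}  |error| ~ {err:.5f}")
```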
The Planet HPC blog has an interesting mission: Setting the R&D Roadmap for HPC in Europe. It’s well worth checking out. Read the Full Story.