BioIT World is running a piece today called “Alternative IT Delivery Models Emerge.” The article is short and it's a standard “ditch your in-house stuff and buy software as a service” piece. I do think that this model makes a lot of sense in more instances than people are ready to accept right now.
But it's reminded me of an idea I keep coming back to.
I think there is a real chance that HPC will move in this direction. If the cost of FLOPS continues to drop while infrastructure needs continue to rise, it will soon be too expensive, and too difficult, for the large government organizations that drive the high end of HPC to keep facilities for the machines to run in (example: at my work it literally takes an act of Congress to build a new building to house a machine). An HPC host would invest $100M once every 15 years or so to build a facility large enough to support its customer base, leveraging economies of scale to build more robust centers than we can afford on our own, and taking advantage of private industry's ability to spend and build with relative ease.
Something else that could drive this? We haven't bought a machine in five years that worked out of the box. When you're buying 10,000- and 20,000-processor machines, they are simply too large for the vendors to test at scale, and there is always a good bit of debugging the OS and tweaking the hardware during installation. If we move to HPC as a service (something along the lines of the hosted solutions from IBM and Sun), then this would be Someone Else's Problem. The problem would still exist, to be sure, but it wouldn't have to be managed by the customers and users of the supercomputer.
Do you think this is totally off the mark? Or does it make sense? Leave a comment and tell me what you think.