I thought that this would probably start in 18 months or so from the bottom up as the small manufacturing and engineering firms out there with HPC needs looked for someone to host their gear. But it looks like hosted HPC is making its way to large scale science first.
No, not Network.com. Remember back in the summer of ’07, when Google and IBM announced they were teaming up to provide hardware and expertise to train the next generation of computational scientists? Right.
Well, the NSF has just announced a strategic partnership wherein the Google-IBM system will be made available for broader scientific research.
“Access to the Google-IBM academic cluster via the CluE program will provide the academic community with the opportunity to do research in data-intensive computing and to explore powerful new applications,” said Jeannette Wing, who heads NSF’s Computer and Information Science and Engineering (CISE) directorate. “It can also serve as a tool for educating the next generation of scientists and engineers.”
Right…isn’t that what the whole cyberinfrastructure program is about? I suppose you can argue that this project includes “internet scale software” rather than just scientific software, and that’s what differentiates it. The Google-IBM project materials float between references to classical parallel processing and Web 2.0 language.
The system described is small: only “approximately 1600 processors.” Tiny by Ranger standards. Google and IBM are providing the resources free of charge to the NSF (hint: only the first bag of crack is free; you pay for the rest).
The CISE directorate will solicit proposals and run a competitive allocation process. And Dr. Wing says she’s open to more.
According to Wing, NSF hopes the relationship may provide a blueprint for future collaborations between the academic computing research community and private industry. “We welcome any comparable offers from industry that offer the same potential for transformative research outcomes,” Wing said.
You could argue that this isn’t a sign of anything to come: that NSF is simply being wise in grabbing any free resource it can get its hands on. My bet is that this relationship is going to turn out very well, and that NSF is going to like being able to use resources without having to worry about whether they are deployed professionally and run reliably.
You can even argue that it’s a better use of taxpayer money and limited research dollars to outsource the massive infrastructure and narrow expertise required for hosting, and let the taxpayer fund innovations farther up the value chain (like a programming model that can get more than 5% of peak on general calculations).
Yeah, you could argue that. And once you do, you have a business model.