Are GPUs going to become a common utility? NVIDIA’s Steve Wildstrom blogs that CUDA is coming to the Cloud:
For the most part, cloud services focus on plain-vanilla Linux or Windows servers. Since they are generally run as headless servers accessed only through a remote desktop, they feature little if anything in the way of GPU capabilities. But add a high-end GPU—or several—and drive that hardware with general-purpose GPU programming techniques such as CUDA, and you can quickly get a low-cost, on-demand supercomputer for big, computationally intense jobs without the capital expense and administrative complexity of running your own high-performance system.
Wildstrom goes on to say that bandwidth constraints can make it impractical to move vast input and output data sets sometimes used in HPC over the Internet. For moving more than 250 GB of data, he writes that Penguin’s POD offers a disk caddy service that moves information in 2 TB chunks via overnight air shipping.
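The back-of-envelope math bears this out. Here is a minimal sketch of the transfer-time arithmetic; the bandwidth figures are assumptions for illustration, not numbers from Wildstrom's post:

```python
# Back-of-envelope: how long does it take to move a 2 TB disk caddy's worth
# of data over the wire? Sustained rates of 100 Mbit/s and 1 Gbit/s are
# illustrative assumptions.

def transfer_hours(size_bytes: float, mbit_per_s: float) -> float:
    """Hours to move size_bytes at a sustained rate of mbit_per_s."""
    bits = size_bytes * 8
    seconds = bits / (mbit_per_s * 1_000_000)
    return seconds / 3600

TWO_TB = 2e12  # one 2 TB caddy, in bytes (decimal terabytes)

print(f"2 TB at 100 Mbit/s: {transfer_hours(TWO_TB, 100):.1f} hours")    # ~44 hours
print(f"2 TB at 1 Gbit/s:   {transfer_hours(TWO_TB, 1000):.1f} hours")   # ~4.4 hours
```

At a sustained 100 Mbit/s, the caddy takes almost two days to move over the network, so an overnight air shipment genuinely is the faster pipe.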
Shipping? Data shouldn’t require a delivery van. I’m thinking these guys should be working with VCollab, whose 3D CAE compression technologies might be able to shrink that data whale enough to fit in an email attachment. This could be a match made in the Cloud.