Windows Azure's First Year: Can New Capabilities Turn the Cloud into a Supercomputer?

Microsoft’s Windows Azure launched a year ago this week, and Mary Jo Foley at ZDNet marked the occasion with a post about two new capabilities designed to bring Azure closer to its goal of providing a cloud-based supercomputer.

According to Bill Hilf, General Manager of the Technical Computing Group at Microsoft, the company has a two-pronged strategy for bringing HPC to Azure: the Dryad distributed computing framework (Microsoft’s answer to Google’s MapReduce) and the Parallelization Stack, a set of runtimes, languages, and other parallel/multicore tools and technologies that Microsoft has been building for the past couple of years.
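For readers who haven't run into the model, here's a rough sketch in Python of the MapReduce-style word count that frameworks like Dryad generalize. This is illustrative only, not Dryad's actual interface (Dryad's developer-facing layer, DryadLINQ, is .NET-based, and Dryad executes arbitrary dataflow graphs rather than this fixed map/shuffle/reduce pipeline):

```python
from collections import defaultdict
from typing import Iterable

def map_phase(documents: Iterable[str]):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key (the framework normally does this)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values into a final count."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cloud is a supercomputer", "the cloud scales"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'the': 2, 'cloud': 2, 'is': 1, 'a': 1, 'supercomputer': 1, 'scales': 1}
```

In a real distributed framework each phase runs in parallel across many machines; the point of Dryad's more general graph model is that your computation doesn't have to fit this exact three-stage shape.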

The combination of these technologies, Hilf said, will let Microsoft build an abstraction layer through which users can tap compute resources wherever they happen to live: on multicore PCs, on servers, or in the cloud. The customers most likely to benefit are the real data wonks, the folks Hilf calls "domain specialists": people in financial services, manufacturing, oil and gas, media, the hard sciences, and other data-intensive professions with an insatiable appetite for data.
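To make that abstraction-layer idea concrete, here's a minimal sketch using Python's standard concurrent.futures module. The same map-style call fans work out across local cores today; the pitch Hilf describes is that the executor behind it could just as well be a server farm or the cloud, with the calling code untouched. This is an analogy, not Microsoft's actual stack, and the `analyze` function and its workload are made up for illustration:

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(record: int) -> int:
    # Stand-in for a compute-heavy step: a risk model, a seismic trace,
    # a render job, whatever the "domain specialist" is crunching.
    return sum(i * i for i in range(record))

if __name__ == "__main__":
    records = [100_000, 200_000, 300_000, 400_000]
    # Today this fans out across local cores. The abstraction-layer promise
    # is swapping this executor for a cluster or cloud backend without
    # touching the analysis code itself.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze, records))
    print(results)
```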

To me, the biggest challenge Microsoft faces is the complexity of its story. The value proposition seems to be effectively unlimited data access and processing power, but the devil is in the details.