While many of us old-guard HPC codgers may remember a time, some years ago, when Microsoft was making a big push into HPC, the company's current HPC strategy is pretty much non-existent from what I can tell. It does have a solid "Big Compute" strategy, however, and when I came across the Azure site today, I learned a thing or two.
Big Compute refers to a wide range of tools and approaches to run large-scale applications for business, science, and engineering by using a large amount of CPU and memory resources in a coordinated way. For example, typical Big Compute applications perform complex modeling, simulation, and analysis, sometimes for a period of many hours or days, and they might run on a cluster of on-premises computers, on compute resources in the cloud, or on a combination of the two. The essence of Big Compute is distributing application logic to run on many computers or virtual machines at the same time, in parallel, to solve problems faster than a single computer can. A Big Compute solution provides the necessary compute resources, infrastructure, management and scheduling tools, and workflow to run Big Compute applications efficiently.
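That "distribute the logic to many computers at once" idea is easy to see in miniature. Here's a minimal sketch, with a made-up sum-of-squares kernel standing in for a real simulation or analysis workload; nothing here is Azure-specific, it just fans a job out across local worker processes and combines the partial results.

```python
# Minimal sketch of the Big Compute pattern: split a CPU-bound job into
# independent chunks and run them on many workers in parallel, then
# combine the partial results. The kernel (sum of squares over a range)
# is a hypothetical stand-in for a real modeling or simulation step.
from multiprocessing import Pool


def simulate_chunk(bounds):
    """Stand-in compute kernel: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))


def run_big_compute(n, workers=4):
    # Partition the problem into one chunk per worker; the last chunk
    # absorbs any remainder so the full range [0, n) is covered.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(simulate_chunk, chunks)  # fan out in parallel
    return sum(partials)  # reduce the partial results


if __name__ == "__main__":
    print(run_big_compute(1_000_000))
```

In a real Big Compute deployment the "workers" would be cluster nodes or cloud VMs and the scheduling would be handled by the platform's job manager, but the shape of the solution is the same: partition, fan out, combine.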
OK, so "Azure offers organizations scalable, on-demand compute capabilities and services, enabling them to solve compute-intensive problems and make decisions with the resources, scale, and schedule they require." Sounds like HPC as a service to me. Can't we just call a spade a spade?
In fact, now that we've defined our terms, Microsoft, I wish you luck with what looks to be good, solid technology for technical computing. I've been in this industry my entire career, and the only advice I can offer is this: please hire somebody who speaks plain English.