Sun’s Marc Hamilton is writing on his blog today about virtualization and its use (or otherwise) in HPC. He points out the obvious — that we don’t use virtualization very often in HPC — and offers two reasons for that.
First, that the primary focus of virtualization vendors and advocates has been the consolidation of services previously hosted on individual servers onto a single platform — a worldview at odds with the fact that most HPC users want every bit of their server, and 1,000 others besides, devoted to their app.
The other reason he gives is a perceived HPC predisposition toward open source:
However, in the HPC space, you see precious little use of either Hyper-V or VMware, at any price. HPC researchers, long at the forefront of open source, have chosen instead to focus on open source virtualization platforms.
I disagree with this as a motivation, but this is a tangent to his thesis. In my world (HPC in the DoD) we absolutely aren’t motivated or de-motivated by open source. For us the lack of virtualization goes back to the performance argument. We simply haven’t needed it.
He does go on to suggest a motivation for the potential growth of virtualization in HPC:
With the Sun xVM Server, HPC centers small and large will be able to manage their cluster with xVM Ops Center, and use xVM Server to virtualize Linux, Solaris, and Windows guest operating systems running on their cluster. No more long committee meetings deciding which Linux kernel to run or how many hours a year that Microsoft funded researcher can get access to the cluster to run Windows. For that matter, advanced programmers writing multi-threaded code with OpenSolaris can run on the cluster too with xVM.
That potential use I’ll agree with, at least in part. It would give center operators more flexibility in providing services, and it may in some cases give users more choice, at least on the requirements fringe.
What do insideHPC readers think? Is there a real future for virtualization in HPC?