Virtualization in HPC


Sun’s Marc Hamilton is writing on his blog today about virtualization and its use (or otherwise) in HPC. He points out the obvious — that we don’t use virtualization very often in HPC — and offers two reasons for that.

First, that the primary focus of virtualization vendors and advocates has been the consolidation of services previously hosted on individual servers onto a single platform, a worldview at odds with the fact that most HPC users want every bit of their server (and 1,000 others) devoted to their application.

The other reason he gives is a perceived HPC predisposition toward open source:

However, in the HPC space, you see precious little use of either Hyper-V or VMware, at any price. HPC researchers, long at the forefront of open source, have chosen instead to focus on open source virtualization platforms.

I disagree with this as a motivation, but this is a tangent to his thesis. In my world (HPC in the DoD) we absolutely aren’t motivated or de-motivated by open source. For us the lack of virtualization goes back to the performance argument. We simply haven’t needed it.

He does go on to suggest a motivation for the potential growth of virtualization in HPC:

With the Sun xVM Server, HPC centers small and large will be able to manage their cluster with xVM Ops Center, and use xVM Server to virtualize Linux, Solaris, and Windows guest operating systems running on their cluster. No more long committee meetings deciding which Linux kernel to run or how many hours a year that Microsoft funded researcher can get access to the cluster to run Windows. For that matter, advanced programmers writing multi-threaded code with OpenSolaris can run on the cluster too with xVM.

That potential use I’ll agree with, at least in part. It will give center operators more flexibility in providing services and it may in some cases provide users with more choice, at least on the requirements fringe.

What do insideHPC readers think? Is there a real future for virtualization in HPC?


  1. From what I’ve seen there is demand for virtualization in HPC, especially when people are considering mixing Windows HPC Server 2008 and Linux/Unix and don’t want to over-dedicate their cluster to one OS. This is because it’s faster to provision a VM than to use other methods like dual boot or PXE. Some of the main concerns for adoption that I’ve seen are the limited support for MPI and InfiniBand drivers within VMs, and whether parallel codes can cope with some of the nicer features of VMs, such as checkpointing and migration. Where virtualization can be a player is in environments that run serial jobs and can get by with Ethernet. Then there is the biggest argument against using VMware: the slight performance decrease of running within a VM.

    These arguments apply to HPC clusters and do not hold true in traditional data centers.

  2. One thing to keep in mind is that if you’ve used an IBM system with a Federation Switch, you’ve been using virtualization and probably didn’t even know it. The Power4, Power5, Power6, and soon-to-be Power7 ALL use VMs.

    IBM calls them partitions, or LPARs (Logical Partitions), and they are controlled by a hypervisor. As far as I can tell, it’s exactly the same as running a virtual machine, but (and remember, I no longer work for them) IBM does a heck of a lot better job of making it transparent than any other company I have seen. In fact, I don’t think there is a company out there that comes close. They took most of the ideas and technology from the mainframes and ported them to the servers.

    Need a web server farm, or database farm, etc.? Just set up a bunch of Dynamic LPARs in a P6 machine and it can automagically move memory and processors between partitions/VMs to balance workload. Need more servers than you have processors? No problem, you can split processors, network adapters, etc. between VMs; it’s called micropartitioning. Etc., etc., etc.

    So, to say we don’t use VMs in HPC isn’t really correct. We do; you probably just didn’t realize it. Which is as it should be.
    (And yes John, that was a dig at being a pointy haired boss)



  3. In my own defense, Rich, I’ve never run an IBM. As for the pointy-haired boss comment, I just felt a need to defend myself! 🙂

    But you make a good point. I’d guess Sun’s partitioning scheme uses a similar approach (that’s half question, half statement).