HPC and virtualization


Josh Simons at Sun recently took a trip to ORNL to talk with their System Research Team about work that both organizations are doing with respect to virtualization in HPC. His full trip report is an interesting read.

Uses for virtualization in HPC? From Josh’s report:

[Resiliency] In addition, clusters are getting larger. Much larger, even with fatter nodes. Which means more frequent hardware failures. Bad news for MPI, the world’s most brittle programming model. Certainly, some more modern programming models would be welcome, but in the meantime what can be done to keep these jobs running longer in the presence of continual hardware failures?

[Scaling] Among them, the use of multiple virtual machines per physical node to simulate a much larger cluster for demonstrating an application’s basic scaling capabilities in advance of being allowed access to a real, full-scale (and expensive) compute resource.

[System-level portability] Geoffroy also spoke about “adapting systems to applications, not applications to systems” by which he meant that virtualization allows an application user to bundle their application into a virtual machine instance with any other required software, regardless of the “supported” software environment available on a site’s compute resource.

[Observability] Observability was another simpatico area of discussion. DTrace has taken low-cost, fine-grained observability to new heights (new depths, actually). Similarly, SRT is looking at how one might add dynamic instrumentation at the hypervisor level to offer a clearer view of where overhead is occurring within a virtualized environment to promote user understanding and also offer a debugging capability for developers.
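
To make the system-level portability idea concrete: the mechanics boil down to “ship a disk image, boot it on the compute node.” Below is a minimal sketch, assuming libvirt and its Python bindings are available on the node and that the user has a prebuilt qcow2 image containing the application and everything it needs; the image path, guest name, and sizing are made up for illustration.

```python
# Boot a user-supplied application image as a transient guest via libvirt.
# Assumes: KVM on the node, libvirt with Python bindings, and a prebuilt
# qcow2 image holding the application plus its full software environment.
import libvirt

IMAGE = "/scratch/jdoe/myapp-environment.qcow2"   # hypothetical path

domain_xml = f"""
<domain type='kvm'>
  <name>myapp-vm</name>
  <memory unit='GiB'>4</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{IMAGE}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
dom = conn.createXML(domain_xml, 0)      # define and start a transient guest
print(f"started {dom.name()}, active: {bool(dom.isActive())}")
conn.close()
```

The plumbing isn’t the point; the point is that the site’s “supported software environment” question collapses to “can the site boot this image?”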

Performance addicts — gotta have that last op…GOTTA HAVE IT!! — will pooh-pooh the idea of losing cycles to the VM. I can already hear them gearing up to point out my foolish carelessness with their hard-earned MADDs. My own personal bias is that this point of view is irrelevant to the point of irresponsibility.

We already let between 80 and 99% of a machine’s available capability fall on the floor, and spend thousands of man-hours trying to do as much as possible with the few percentage points we can get. In doing so we have totally ravaged the concept that people (with their ability to reason) are more valuable than machines. Time spent making supercomputing more usable and more accessible, on everything from more usable programming tools to interfaces that support the end user, will bring more people into HPC, who in turn will make HPC better and use HPC to make more areas of everyday life better. This is the path to realizing the promise of HPC in the next two decades: not 10, 100, and 1,000 PF machines.

So, I think VMs (and UIs and IDEs and APIs) are relevant to the degree to which they support the creation of environments that allow users and programmers to worry more about the task they are accomplishing than the tool they use to accomplish it. 1 or 2 FLOPS be damned. There, I said it.
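
To put rough numbers behind that (assumptions for illustration, not measurements): if an application sustains something like 5% of a machine’s peak, a 2% virtualization tax on the sustained rate is noise next to the 95% we already give up.

```python
# Back-of-the-envelope: what does a small VM overhead cost relative to what
# applications already sustain? The numbers below are assumptions, not benchmarks.
peak_tflops = 1000.0        # hypothetical 1 PF (peak) machine
sustained_fraction = 0.05   # assume the application sustains 5% of peak
vm_overhead = 0.02          # assume a 2% virtualization tax

native = peak_tflops * sustained_fraction
with_vm = native * (1.0 - vm_overhead)

print(f"native sustained:     {native:.1f} TF ({sustained_fraction:.0%} of peak)")
print(f"sustained under a VM: {with_vm:.1f} TF ({with_vm / peak_tflops:.2%} of peak)")
# -> 50.0 TF vs. 49.0 TF: the gap between peak and sustained dwarfs the VM tax.
```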

Oh, Josh’s post ends with a VM-in-HPC reading list. Check it out.

Comments

  1. Jeff Layton says

    John – sorry but I have to comment about the performance piece. 🙂

    I like the concept of virtualization and I agree with most everything Oak Ridge said (haven’t read the trip report yet). Great ideas, and it would make life easier on so many fronts, including the idea of moving or replicating VMs on the fly. Laudable goals, and I would love to see things move along.

    But, currently, the performance when running on VMs really isn’t great. I’m not talking about a couple of percent. The last tests I’ve seen indicate that you take a network performance hit on the order of 20-30% when running in a VM. That’s pretty severe.

    Second – I know of one experiment where a code doing some local IO was running on a single node and was moved to another node. It was running a very popular cluster package along with Xen. It took about 30 minutes to move the application from one node to another. This case isn’t as strange as you would think. Writing to centralized storage is a good thing, but for some applications, it’s actually better to write to local storage.

    Third – one thing that is done with HPC applications to get better and more predictable performance is pinning each process to a core (a minimal pinning sketch appears after the comments). But VMware specifically says not to do this because of various problems in moving a process from one node to another. While VMware isn’t all VMs today 🙂 they have a great deal of experience.

    As with 10GigE, I would love to see these and other problems solved to allow virtualization in HPCC. There are just some problems that need to be overcome today.

    BTW – here’s a blog I wrote a while ago about the concept.

    http://www.delltechcenter.com/page/6-02-2008+-+Whither+Virtualization+in+HPCC+-+Comments

    I don’t know if it helps or not 🙂

  2. John,

    Ironic that this is twice in a week where you have posted an article that has great relevance to the situation at my site.

    Last week we had a Dell rep telling us InfiniBand was on its way out and we needed to seriously look at moving the data center to 10GigE in the future (replacing all of our existing infrastructure, of course).

    And now the discussion on virtualization. It’s a big topic here at our site and the management staff has drunk the VM Kool-Aid. Ignore the man behind the curtain, it will all work as advertised. It’s fast, it’s reliable, it’ll make you breakfast in the morning. It fits all situations in the data center. (-:

    VMs surely have their place in the data center, probably lots of places. But the hype being sold to us is almost funny. It will be interesting to see what plays out. I’ll continue to read the various links and consolidate some information. Maybe I’ll have a better-informed comment later as opposed to just an observational one.

    Just thought it was funny that insideHPC nailed two hot topics for us in less than a week.

    Cheers.

    Rich

  3. Jeff (and Rich) –

    So, I’m being a little facetious (but only a little) when I say “FLOPS be damned.” Clearly, 30% performance degradation is a big deal. I guess my real point is that we have big usability issues in HPC that we continue to ignore while we strive after FLOPS. 20 years ago this may have been appropriate, but today I think the situation is analogous to continuing to tweak the engine in a car we’ve been driving for 20 years with no seat. Let’s put in a seat.

    I realize that you aren’t arguing against my point, but rather observing that there are real problems to be overcome. I agree. I’m just afraid that no one is going to take the time to overcome them because they lie off the path to higher immediate performance.

  4. Jeff Layton says

    John,

    Excellent comments. I totally agree. I guess we are all speed junkies and we need to get off the crack to make our lives better 🙂 But it doesn’t help when the government agencies that fund research and machines focus totally on performance and price as the measures for either funding a research proposal or selecting a vendor to build a machine. For example, I wish NSF would start funding open-source projects to solve some of these problems.

    Keep the blogs and ideas coming – I like to see things shaken up 🙂

    Thanks!

  5. Ref: “big usability issues in HPC”

    I’m unclear how VMs are perceived as a solution here. There is research in this area, but it is perceived as an HPC language issue, i.e. what are the best abstractions we can provide to users to easily, effectively, and robustly express a wide range of problems on highly parallel machines?

    I’m all for increasing funding in this much neglected area (although DARPA HPCS did help a bit).

    I’ll be careful about applying the VM Kool-Aid as a solution to this critical issue, though. I think the benefits of VMs are really in the data center management arena.

    IMHO

  6. Ed: as to how VMs aid in usability in HPC…my peak usage by computational hours (diverse HPTC population in an R&D program) is at 2048 cores. For my population this number grows as a fraction of the total number of cores available on the machine. At this core count users are experiencing job interrupts due to hardware failures, and this phenomenon has gotten worse and will continue to get worse over time. The ability to be proactive, even for a small class of failures, and pack up a job from a failing processor and move it in order to preserve the computation (which may run for days) is a usability issue potentially addressed by VMs (a toy checkpoint sketch follows the comments). And by other technologies. And obviously not without performance penalties.

    There are language solutions to these problems, but there are not only language solutions, and the best solution (from a usability perspective) is one that requires no change by the user.

  7. John, thanks for the post.

    I’m particularly interested in the “System-level portability” usage of VMs in HPC that you mentioned. A great use of this concept is Globus’ Nimbus toolkit [1] and its usage to create Science Clouds [2]. This would lower the barrier to getting more applications into a cloud or on a computational grid.

    Deploying an application’s required environment as a VM to remote clusters in a cloud or computational grid will save time both for the application developer and user (fewer wrapper scripts, conditionals, and job restarts due to mismatched environments) and for the support personnel and system administrators at the local and remote sites (where support tickets are opened and servers are tweaked to accommodate new applications).

    Perhaps if those savings can be quantified, they can justify the purchase of additional hardware to offset any loss in performance that results from VM usage.

    [1] http://workspace.globus.org/
    [2] http://workspace.globus.org/clouds/
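
A footnote on the core-pinning point in comment 1: on Linux the pinning itself is a one-liner, which is part of why it’s such an ingrained HPC habit and why losing it under a VM scheduler stings. A minimal sketch (Linux-only; the rank environment variable and core assignment are illustrative):

```python
# Pin the current process (e.g. one MPI rank) to a single core on Linux.
# This is the bare-metal habit Jeff describes; VM schedulers generally want
# to keep that placement freedom for themselves.
import os

rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", 0))  # rank under Open MPI, else 0
target_core = rank % os.cpu_count()

os.sched_setaffinity(0, {target_core})   # 0 = the calling process
print(f"rank {rank}: pinned to core {target_core}, "
      f"affinity now {sorted(os.sched_getaffinity(0))}")
```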
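
And a footnote on the pack-up-and-move scenario in comment 6: VM migration is one way to get there without touching the application; application-level checkpoint/restart is the do-it-yourself alternative. The toy sketch below (file name and state layout are made up) shows the kind of code users are currently asked to write by hand, which is exactly the burden a system-level approach would lift.

```python
# Toy application-level checkpoint/restart: periodically dump solver state so
# a job interrupted by a node failure can resume elsewhere. This is the change
# required of the user that VM- or system-level approaches try to eliminate.
import os
import pickle

CHECKPOINT = "state.chk"   # hypothetical path on shared storage

def load_or_init():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)            # resume where we left off
    return {"step": 0, "value": 0.0}         # fresh start

state = load_or_init()
while state["step"] < 1_000_000:
    state["value"] += 1.0 / (state["step"] + 1)      # stand-in for real work
    state["step"] += 1
    if state["step"] % 100_000 == 0:                 # checkpoint every N steps
        with open(CHECKPOINT + ".tmp", "wb") as f:
            pickle.dump(state, f)
        os.replace(CHECKPOINT + ".tmp", CHECKPOINT)  # atomic swap

print(f"done at step {state['step']}, value {state['value']:.6f}")
```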