VMware moves Virtualized HPC Forward at SC17

In this video from SC17, Martin Yip and Josh Simons of VMware describe how the company is moving virtualized HPC forward.

In recent years, virtualization has started making major inroads into the realm of High Performance Computing, an area that was previously considered off-limits. In application areas such as life sciences, electronic design automation, financial services, Big Data, and digital media, people are discovering that running a virtualized infrastructure brings benefits similar to those enjoyed by enterprise applications, along with others unique to HPC.

Back in August, VMware introduced vSphere Scale-Out Edition, a new solution in the vSphere product line aimed at Big Data and HPC workloads. It includes the features and functions most useful to these workloads, such as the core vSphere hypervisor and the vSphere Distributed Switch.

By virtualizing these workloads with vSphere Scale-Out, customers can benefit from:

  • Dramatic Resource Optimization — Optimizing memory and CPU utilization in a virtualized environment can significantly increase performance over a physical system. VMware tests of Big Data workloads have shown that virtualized Spark cluster performance can exceed physical cluster performance by up to 10 percent.
  • Simplified Compute Node Creation — Adding more capacity to a Big Data cluster can be done by cloning VMs and giving them an identity. Clusters can be scaled up and down as needed.
  • Network Flexibility — Widely distributed systems, like most Big Data clusters, require the management of many nodes using a common central point of control across the network, which vSphere delivers through the Distributed Switch.
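Cloning VMs to add compute nodes, as described above, is typically scripted against vSphere tooling. Here is a minimal sketch that builds `govc vm.clone` commands for a batch of new nodes; the template name `bigdata-template`, the `node` prefix, and the provisioning workflow are assumptions for illustration, not part of the original article.

```python
# Sketch: scaling out a virtualized cluster by cloning a template VM.
# Names ("bigdata-template", "node") are hypothetical; real inventory
# paths depend on your vSphere environment.

def clone_commands(template, prefix, start, count):
    """Build one `govc vm.clone` command per new compute node."""
    cmds = []
    for i in range(start, start + count):
        name = f"{prefix}-{i:02d}"
        # -on=false leaves the clone powered off until it is given an
        # identity (hostname, IP) by your provisioning tooling.
        cmds.append(f"govc vm.clone -vm {template} -on=false {name}")
    return cmds

for cmd in clone_commands("bigdata-template", "node", 1, 3):
    print(cmd)
```

Scaling the cluster back down is the reverse operation: power off and delete the cloned VMs, which is what makes elastic capacity straightforward in a virtualized environment.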

Over the last decade and a half, virtualization has grown from a niche technology confined for use in test/dev environments to the de facto platform on which practically all enterprise data center workloads are run. It is also the foundation on which cloud computing is built. Better hardware utilization, reduced CapEx and OpEx, improved business agility, enhanced business continuity and security—these benefits are by now well understood and enjoyed widely.

Performance of Virtualized HPC

If virtualization is to be used in HPC or Technical Computing environments, applications must run well when virtualized. Thanks to advances in the sophistication of virtualization software, along with significant and continued development of hardware support, the performance degradation seen with earlier versions of vSphere has been much reduced.

Throughput applications—single-process, possibly multi-threaded jobs—have been found to run on vSphere with well under 5% degradation, often just 1-2%. When many instances are run in parallel on a cluster or grid, job throughput can sometimes be higher in the virtualized environment. These results are consistent across a range of disciplines, including life sciences, digital content creation, electronic design automation, and finance. The chart below shows performance results for a representative throughput workload, the BioPerf benchmark suite.

Performance of the BioPerf benchmark suite, showing the ratio of virtual to native performance. Higher is better, with 1.0 indicating that virtual performance is the same as native. HP DL380p G8 server, dual Intel Xeon E5-2667v2 processors, 128GB, VMware ESX 6.0.
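The virtual-to-native ratio plotted in such charts comes directly from measured runtimes: throughput benchmarks like BioPerf report elapsed time, so the performance ratio is native runtime divided by virtual runtime. The sketch below shows the arithmetic; the runtimes are placeholder values, not measured data from the article.

```python
# Virtual-to-native performance ratio for a runtime-measured workload.
# Higher is better; 1.0 means virtual performance matches native.
# The runtimes used here are placeholders for illustration only.

def perf_ratio(native_runtime_s, virtual_runtime_s):
    """Performance ratio from elapsed runtimes (lower runtime = faster)."""
    return native_runtime_s / virtual_runtime_s

# Example: a ~2% virtualization overhead (102 s virtual vs. 100 s native)
ratio = perf_ratio(100.0, 102.0)
print(f"virtual/native performance ratio: {ratio:.3f}")
```

A ratio just below 1.0 corresponds to the "well under 5% degradation, often just 1-2%" range described above for throughput applications.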

To learn more, check out the technical white paper: Virtualizing HPC and Technical Computing with VMware vSphere.

See our complete coverage of SC17

Sign up for our insideHPC Newsletter
