Hardware virtualization refers to the creation of a number of self-contained virtual servers that are resident on the physical server, or host machine. This allows multiple applications to be run on the same machine while providing security and fault isolation. Typically an administrator decides how much of each resource — CPU, memory, networking — to allocate to the virtual machine (VM), while assigning priorities to different classes of users. The virtual infrastructure dynamically enforces these policies to ensure that each VM gets its fair share of resources.
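As a rough illustration of the fair-share policy described above (a generic proportional-share sketch, not any particular hypervisor's scheduler; the VM names and share weights are hypothetical):

```python
# Illustrative sketch: proportional-share allocation of a host resource
# among VMs, weighted by administrator-assigned "shares" (priorities).
def allocate(total, shares):
    """Divide `total` units of a resource among VMs in proportion to shares."""
    total_shares = sum(shares.values())
    return {vm: total * s / total_shares for vm, s in shares.items()}

# Example: 32 CPU cores split among three VMs with 2:1:1 share weights.
print(allocate(32, {"vm_a": 200, "vm_b": 100, "vm_c": 100}))
# {'vm_a': 16.0, 'vm_b': 8.0, 'vm_c': 8.0}
```

Real hypervisors combine shares with reservations and limits, but the proportional split above captures the basic enforcement idea.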
This paper examines aspects of computing important in HPC (compute and network bandwidth, compute and network latency, memory size and bandwidth, I/O, and so on) and how they are affected by various virtualization technologies. The benchmark results presented will illuminate where cloud computing, as a virtualized infrastructure, is sufficient and where it is inappropriate for a given workload. In addition, the paper provides a quantitative assessment of the performance differences between a sample of applications running on various hypervisors, so that data-based decisions can be made for datacenter and technology adoption planning.
The combination of virtualization and cloud computing provides value to both the end users and IT providers in HPC and enterprise environments. And, once created, these private clouds can be burst to a hybrid cloud to create seamless and secure extensions of the organization’s on-premise infrastructure. Performance is the key. Read this informative guide to learn more.
Over at QNIB, Christian Kniep writes that his latest presentation examines the intersection of Docker, containerization, and configuration management. “In my humble opinion, Configuration Management might become a niche. As hard as it sounds.”
In this video, Matt Herreras and Josh Simons discuss recent developments in virtualization technologies for HPC. Please pay attention, folks. This stuff is going to change how we, as a community, get supercomputing done and it is happening now.
The Johns Hopkins University Applied Physics Laboratory migrated independent grids into a fully virtualized environment that reduced idle computing cycles while providing a big jump in throughput when pushing millions of calculations through the system.
“For environments where large memory systems are critical — bioinformatics, legacy databases, i.e. Big Data — we have focused on a lot of performance enhancements. We strive to make large memory systems as fast as possible. It is interesting to note that in some cases, our VMs are faster than physical machines. We do this by prefetching and caching data based on our understanding of memory placement and access patterns.”
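The prefetching idea mentioned in the quote can be sketched in miniature. This is a generic stride-detection example for illustration only — it is not ScaleMP's implementation, and the function names are invented:

```python
# Hypothetical sketch of access-pattern-driven prefetching: if recent
# accesses form a constant stride, speculatively fetch blocks ahead.
def detect_stride(history):
    """Return the constant stride of the recorded accesses, or None."""
    if len(history) < 3:
        return None
    strides = {b - a for a, b in zip(history, history[1:])}
    return strides.pop() if len(strides) == 1 else None

def prefetch_candidates(history, depth=4):
    """Predict the next `depth` block addresses from a stride pattern."""
    stride = detect_stride(history)
    if stride is None:
        return []  # irregular pattern: no confident prediction
    return [history[-1] + stride * i for i in range(1, depth + 1)]

print(prefetch_candidates([10, 12, 14, 16]))  # [18, 20, 22, 24]
```

A VM-level memory layer can apply the same principle at page granularity, hiding remote-memory latency behind predicted accesses — which is how a virtualized system can sometimes beat a naive physical configuration.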
In this slidecast, Shai Fultheim from ScaleMP provides an update on the company’s recent announcements and previews vSMP 5.5 Foundation software. With the rise of in-memory analytics, the company is seeing rapid deployment growth for its server aggregation software.
In this slidecast, Matt Herreras and Josh Simons from VMware describe how Hybrid Cloud powered by virtualization offers increased scientific agility for HPC workloads. Make no mistake; virtualization is coming to HPC in a Big Way, and everyone will benefit.