What is virtualization?

Virtualization is a technique that allows multiple operating systems to run simultaneously on the same hardware. Consider, for example, a fault-tolerant server in which several instances of Linux are executing. If a fault happens to wipe out one of those instances, the rest keep running unabated. In this regard, virtualization is to operating systems what multiprocess environments are to user applications.

One approach to achieve this is to emulate all of the underlying hardware. This is the scheme followed by VMware and Virtual PC. It allows operating systems to run unmodified, but at a significant performance cost.

An alternative approach is to instead provide an API that an operating system writes to. This is the method that Xen follows. The OS must be modified, but it runs at near-native performance.

Xen, VMware, and Virtual PC are all examples of “virtual machine monitors” (VMMs): the software layer that sits beneath the guest operating systems.

CPUs normally have a standard user mode and a privileged kernel mode. This latter mode allows operating systems to act as supervisors of the user applications. For security, the virtual machine monitor must run at a more privileged level than the operating systems; this allows it to act as a “hypervisor” to the OS.

It is this difference in protection modes, among other reasons, that requires the operating system to be ported to a hypervisor like Xen. That seems fine for open source software, but it is not feasible for many commercial OSes.

To solve this problem, Intel and AMD have announced hardware support for virtualization in their CPUs. Intel’s is “VT-x,” short for Virtualization Technology for x86 and codenamed “Vanderpool.” AMD’s goes by the codename “Pacifica.” These extensions allow an operating system to run unmodified, and at native speeds, on a VMM.

There has been much excitement about virtualization recently, particularly with regard to Xen. If the technology catches on, it may become common practice among data centers to run multiple operating systems simultaneously.