Over at the new ScaleMP Blog, CEO Shai Fultheim writes that, as the definition of a “supercomputer” continues to evolve, virtualization is the key to making HPC accessible in the real world.
In my view, “Supercomputer” is all about one computer – with shared memory and I/O subsystems. It is not about “super-large-collection-of-small-machines-that-I-need-to-program-and-administer-individually”. Enter the ability to create a single system (that is the critical element) from a bunch of these smaller servers, eliminating the complexities of a cluster. By using virtualization – or in other words, software to replace proprietary hardware – a Supercomputer can be created from smaller pieces. Thanks to the power of virtualization, developers and users will not have any idea of what is happening under the hood. Simpler to program, simpler to run, and flexible. You can create a Supercomputer when you need one, with no need to farm your code out to hundreds of little machines, or to shell out millions of dollars for a monster.
Read the Full Story.