“100% Flash in the Datacenter? It won’t happen any time soon. Many (most?) tier one workloads will be moved to flash of course, but data is accumulating so quickly that it’s highly unlikely you will be seeing a 100% flash datacenter any time soon. It will take a few years to have about 10 to 20% of data stored on flash, and the rest will remain on huge hard disks (cheap 10+TB hard disks will soon be broadly available, for example).”
In this slidecast, Christian Kniep presents: QNIBTerminal Plus InfiniBand – Containerized MPI Workloads. “QNIB Solutions (early on called ‘QNIB Inc’) derives from the first project Christian did for his B.Sc. report, an InfiniBand monitoring suite. For the sake of the report it was named ‘OpenIBPM: Open Source InfiniBand Performance Monitoring’. Afterwards Christian renamed it to match his last name (Kniep): ‘QNIB: Qualified Networking with InfiniBand’. Since then, QNIB has become the theme of his pet projects.”
Both large-scale environments and scale-out workloads (such as Big Data) are becoming more important in the enterprise. In fact, with the rise of Big Data, the advent of affordable, powerful clusters, and strategies that take advantage of commodity systems for scale-out applications, these days the enterprise computing environment is looking a lot like HPC.
One of the best ways to realize the full performance benefits of virtualization is to make it available through a private cloud. The VMware vCloud Suite realizes operational efficiency through policy-driven operations. By providing simplified operations management, the cloud solution drives greater resource utilization and staff productivity.
The software-defined data center is the underlying data center architecture that allows most IT infrastructure to be defined in software and to function as enterprise-wide resources. This approach enables ITaaS to be delivered in a virtualized environment with greater agility, speed, and quality of service.
Virtualization allows workloads to be compartmentalized in their own VMs in order to take full advantage of the underlying parallelism of today’s multicore, heterogeneous HPC systems without compromising security. This approach is particularly beneficial for organizations consolidating multiple groups onto a shared cluster, or for teams with strict security requirements – for example, a life sciences environment where access to genomic data needs to be restricted to specific researchers.
This article is the third in an editorial series that explores the benefits the HPC community can achieve by adopting HPC virtualization and secure private cloud technologies. Virtualization has been proven to be a viable architectural approach that addresses the many challenges mentioned in last week’s article. This week and next we look at the benefits of creating a virtualized infrastructure.