Over the past several years, virtualization has made major inroads into enterprise IT infrastructures. Now it is moving into the realm of high performance computing (HPC), especially for compute-intensive applications such as electronic design automation (EDA), life sciences, financial services and digital media entertainment. This article is the first in a series that explores the benefits the HPC community can achieve by adopting proven virtualization and cloud technologies.
With the rise of manycore processors, double-dense blade form factors, and wider and deeper cabinets, the size and density of HPC systems have grown more than 300 percent since 1999. This high density of “heat offenders” requires a much more efficient method of temperature control than is possible with air cooling. And while liquid cooling is generally more efficient, not all liquid alternatives are created equal.
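To see why liquid wins on efficiency, a rough back-of-the-envelope comparison helps: how much heat one cubic meter of coolant can absorb per degree of temperature rise. The values below are standard textbook approximations for air and water at room temperature, used purely for illustration rather than as measurements of any particular cooling system.

    # Rough comparison of how much heat one cubic meter of coolant can carry
    # per degree of temperature rise. Textbook approximations at room
    # temperature; illustrative only.
    air_density = 1.2              # kg/m^3
    air_specific_heat = 1005.0     # J/(kg*K)
    water_density = 998.0          # kg/m^3
    water_specific_heat = 4186.0   # J/(kg*K)

    air_volumetric = air_density * air_specific_heat        # ~1.2 kJ/(m^3*K)
    water_volumetric = water_density * water_specific_heat  # ~4.2 MJ/(m^3*K)

    print(f"Air:   {air_volumetric / 1e3:.1f} kJ per m^3 per K")
    print(f"Water: {water_volumetric / 1e6:.1f} MJ per m^3 per K")
    print(f"Water carries ~{water_volumetric / air_volumetric:.0f}x more heat per unit volume")

By this measure water carries on the order of a few thousand times more heat per unit volume than air, which is what makes liquid attractive at high rack densities; the practical differences between liquid alternatives lie in how that heat is captured and rejected.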
In late 2010 and throughout 2011, however, we noticed a shift in the HPC market as new workloads such as digital media, financial services, life sciences, on-demand cloud computing services and analytics made their way onto HPC servers. We are now seeing another trend developing in the HPC space with the introduction of ultra-dense servers.
Industries from life sciences to financial services rely on HPC clusters to perform complex and critical operations, and that reliance will only grow. So the all-important question is: how do you select, deploy and manage it all? Fortunately, IBM, Intel and NCAR have teamed up to explain their view of best practices for selecting an HPC cluster, based on the process behind building the NCAR Wyoming Supercomputing Center.
“One of the hottest topics we see is remote visualization for post-processing simulation results. One big issue in traditional workflows in technical and scientific computing is the transfer of large amounts of data from where they have been created to where they are analyzed. Streamlining this workflow by processing the data where they were created in the first place is tantamount to shortening the wall-clock time it takes end users to get final results. At the same time, hardware utilization is greatly enhanced by using innovative technology for remote 3D visualization. For this, we have long since entered into a strategic partnership with NICE.”
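The wall-clock argument is easy to quantify with a simple estimate. The numbers below (dataset size, WAN bandwidth, frame resolution, frame rate and compression ratio) are illustrative assumptions, not figures from NICE or any specific site; they are chosen only to show the order-of-magnitude gap between shipping raw results and streaming rendered frames.

    # Illustrative estimate: move a simulation result set across a WAN
    # versus rendering it where it was produced and streaming images.
    dataset_bytes = 5 * 1024**4       # assume a 5 TB result set
    wan_bits_per_sec = 1e9            # assume a 1 Gb/s wide-area link

    transfer_hours = dataset_bytes * 8 / wan_bits_per_sec / 3600
    print(f"Bulk transfer: ~{transfer_hours:.0f} hours before analysis can start")

    # Remote 3D session instead: 1920x1080 frames, 24-bit color, 30 frames/s,
    # compressed ~50:1 by the remote-visualization protocol (all assumed).
    frame_bits = 1920 * 1080 * 24
    stream_mbps = frame_bits * 30 / 50 / 1e6
    print(f"Interactive stream: ~{stream_mbps:.0f} Mb/s, usable immediately")

On those assumptions the remote session needs roughly 30 Mb/s and is available as soon as the simulation finishes, while the bulk transfer ties up the link for about half a day before analysis can even begin.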
Many initially thought that liquid and servers should never mix, but what if the server cooling is done in a completely controlled and secured environment? Liquid submersion cooling has the potential to revolutionize the design, construction, and energy consumption of data centers around the world.