With the growth of big data, cloud, and high-performance computing, demands on data centers around the world are expanding every year. Unfortunately, these demands run up against significant obstacles in the form of operating constraints, capital constraints, and sustainability goals. In this article, we look at 8 of these constraints and how direct-to-chip liquid cooling addresses them.
From bio-engineering and climate studies to big data and high-frequency trading, HPC is playing an ever greater role in today's society. Without the power of HPC, the complex analyses behind these data-driven decisions would be impossible. Because these supercomputers and HPC clusters are so powerful, they are expensive to cool, consume massive amounts of energy, and can require a great deal of space.
The Open Compute Project Foundation was created to produce the most efficient server, storage, and related designs for the next generation of data centers through an open and collaborative development model. By sharing designs that maximize density, minimize power consumption, and deliver expected performance, completely new computing environments can be developed, free from the limitations of legacy thinking.
In the late 1980s, genomic sequencing began to shift from wet-lab work to a computationally intensive science; by the end of the 1990s this trend was in full swing. The application of computer science and high performance computing (HPC) to these biological problems became the normal mode of operation for many molecular biologists.
One of the best ways to realize the full performance benefits of virtualization is to deliver it through a private cloud. The VMware vCloud Suite achieves operational efficiency through policy-driven automation. By simplifying operations management, the cloud solution drives greater resource utilization and staff productivity.
The recent release of a commercial version of the Lustre* parallel file system was big news for business data centers facing ever-expanding data analysis and storage demands. Now Lustre, the predominant high-performance file system at most supercomputer installations around the world, can be deployed to business customers in a hardened, tested, easy-to-manage, and fully supported distribution.