
Automatically adapt cooling with stuff you (mostly) already have

Ted Samson at InfoWorld’s Sustainable IT blog writes today about the advantages of automatically adapting cooling in your datacenter to meet actual (not theoretical) load, without totally retooling your facility.

However, we live in the real world where the pizza gets cold while datacenter admins have to put out (preferably figurative) fires, and datacenter operators waste precious electricity and thousands of dollars — if not tens or hundreds of thousands of dollars — creating unnecessarily chilly meat-locker-like conditions in their datacenters. Sure, tools do exist for regulating temperature on a rack-by-rack basis, such as sophisticated sensor-based offerings from companies like HP and SynapSense. However, not all datacenter operators have the budget, or the level of need, to justify investing in that sort of additional hardware.

Fortunately, datacenter operators may have just about everything they need to automatically optimize cooling in real time using the IT and cooling equipment they already own. Such is the outcome of a recent project by Intel, IBM, HP, Emerson, and Lawrence Berkeley National Laboratory called Advanced Cooling Environment (ACE). Using sensor technology already built into the servers, the organizations devised a way for servers to communicate their cooling needs on a granular basis to existing CRAHs (computer-room air handlers), which then automatically adjust their output.

The project links the temperature sensors in the servers to the control systems of the air handlers (with some translation in between) so that the air handlers can adjust fan speed and air temperature to meet the actual demand of the servers in the room.
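To make the idea concrete, here is a minimal sketch of what that feedback loop might look like: server inlet-temperature readings drive a CRAH fan-speed setpoint via simple proportional control. All function names, thresholds, and gains here are illustrative assumptions on my part, not the actual ACE protocol or any vendor API.

```python
# Hypothetical ACE-style control step (assumed names and numbers, not the
# real project's interface): raise CRAH fan speed when the hottest server
# inlet runs above target, lower it when servers are overcooled.

TARGET_INLET_C = 25.0            # desired server inlet temperature (assumed)
MIN_FAN_PCT, MAX_FAN_PCT = 20.0, 100.0
GAIN = 8.0                       # fan-speed % change per degree C of error

def crah_fan_setpoint(inlet_temps_c, current_fan_pct):
    """Return a new fan-speed percentage from the hottest inlet reading."""
    hottest = max(inlet_temps_c)
    error = hottest - TARGET_INLET_C           # positive => servers too warm
    new_pct = current_fan_pct + GAIN * error   # proportional adjustment
    return max(MIN_FAN_PCT, min(MAX_FAN_PCT, new_pct))
```

For example, with inlets at 23.5, 24.0, and 27.0 °C and the fan at 50 percent, the hottest server is 2 °C over target, so the setpoint steps up to 66 percent; if every server is well under target, the fan falls to its floor instead of spinning needlessly — which is where the fan-energy savings come from.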

The test environment was of limited size, but the project team determined that potential fan energy savings were as high as 90 percent for particular CRAHs in the test. That’s not bad, considering that datacenters are known to spend as much as $1 on cooling for every dollar they spend running IT gear.

More in the article. What I like about this is that it is a realistic approach to managing energy costs. In my organization, IT and facility funding are still separate, and we don’t have much incentive or funding to go out and retool existing facilities for better energy management (although we push pretty hard whenever we build new or retrofit existing computer space).
