For the datacenter, efficient trumps pretty


Microsoft’s Dan Reed (with whom we have spoken for the Green HPC podcast series) has a post about how effectively datacenters turn the energy they consume toward the end goal: powering computers. Dan’s post is specifically about the PUE metric:

Many legacy data centers (those built more than a few years ago) have PUEs in excess of two, or even three. This is largely due to inefficient computer room air-conditioning (CRAC) units, lack of hot and cold aisles, energy losses due to multiple (unnecessary) voltage conversions and aging or inappropriate building designs.

…Today, state of the art data centers have PUEs below 1.5, and there are new designs that could approach a PUE of one by reducing UPS support where appropriate, operating at substantially higher temperatures and exploiting ambient cooling. Many people do not realize that computing hardware is much more resilient to high temperature than history and practice would suggest. It need not be chilled to temperatures suitable for polar bears.
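For readers new to the metric: PUE (Power Usage Effectiveness) is simply total facility power divided by the power that actually reaches the IT equipment, so a PUE of 2 means every watt of computing buys a second watt of cooling, conversion loss and other overhead. Here is a minimal sketch of the arithmetic, using made-up numbers rather than anything from Dan’s post:

```python
# PUE = total facility power / power delivered to IT equipment.
# Figures below are hypothetical, chosen only to illustrate the ratio.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A legacy room: 1 MW of IT load plus another 1 MW of cooling,
# conversion losses and lighting -> PUE of 2.0.
print(pue(total_facility_kw=2000, it_equipment_kw=1000))  # 2.0

# A modern facility: the same 1 MW of IT load with only 400 kW of overhead.
print(pue(total_facility_kw=1400, it_equipment_kw=1000))  # 1.4
```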

Indeed, Dan goes on to mention Christian Belady’s experiment last year, in which a set of servers ran successfully in a tent, outdoors, for six months. We’ve talked with Christian twice in our series so far.

But then Dan goes off into an area that I think is very relevant to the shift that must happen in the minds of HPC datacenter owners. Not the managers, or the people who work in the center, but the owners: the people who write the checks and want to give the tours. Unless you live near cheap, reliable power in an area of the country where you can take advantage of outside air economization, you should probably NOT have a big datacenter anywhere near you. This means many things, including no machine room to tour when VIPs visit you.

This is an outstanding opportunity. It means you can spend your money on building datacenters that work best for the machines, even if they are ugly, and it means that when visitors come you’ll have every reason to talk about what’s really important in supercomputing: the people with the expertise to make the machines sit up and do useful work (sysadmins, architects, programmers, and more). Any rube with $50M can buy a big machine, but it takes talent to use one effectively. That’s differentiation that matters.

Finally, I would be remiss if I did not opine on the most obvious, visual difference between cloud data centers and high-performance computing (HPC) facilities. The former are designed for function, not appearance. They are usually nondescript facilities optimized for efficient hardware operation at large scale, not for human accessibility or for comfort. Indeed, container-based data centers look more like a warehouse and distribution center with parking and utility connections than Hollywood’s idea of a computing center. Conversely, HPC facilities are usually showpieces with signs, elegant packaging and lighted spaces suitable for tours by visiting dignitaries.

At large scale, efficient trumps pretty. It’s all about what one measures.