Google running Belgian datacenter without chillers

According to an article by Rich Miller over at Data Center Knowledge, Google has opened a datacenter in Belgium that runs without chillers.

In our Green HPC podcast series we talk to HPC datacenter managers who are already doing this part of the time, basically when the weather permits or in parts of the world that are habitually cool. The strategy is called outside air economization (for a recent example, see this article about Pete Beckman’s work at ANL). But Google has taken it to its logical conclusion:

Rather than using chillers part-time, the company has eliminated them entirely in its data center near Saint-Ghislain, Belgium, which began operating in late 2008 and also features an on-site water purification facility that allows it to use water from a nearby industrial canal rather than a municipal water utility.

The climate in Belgium will support free cooling almost year-round, according to Google engineers, with temperatures rising above the acceptable range for free cooling about seven days per year on average. The average temperature in Brussels during summer reaches 66 to 71 degrees, while Google maintains its data centers at temperatures above 80 degrees.

What happens if it gets hot in Belgium? That is where the advantages of being the largest computing provider on the planet become evident:

On those days, Google says it will turn off equipment as needed in Belgium and shift computing load to other data centers. This approach is made possible by the scope of the company’s global network of data centers, which provide the ability to shift an entire data center’s workload to other facilities.

This is a remarkable feat of software engineering.

“You have to have integration with everything right from the chillers down all the way to the CPU,” said Gill, Google’s Senior Manager of Engineering and Architecture. “Sometimes, there’s a temperature excursion, and you might want to do a quick load-shedding to prevent a temperature excursion because, hey, you have a data center with no chillers. You want to move some load off. You want to cut some CPUs and some of the processes in RAM.”
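The article does not describe the control software behind this, but the general shape of such a system is easy to sketch. What follows is a minimal illustration of a temperature-driven load-shedding loop, not Google’s implementation; the helpers (read_inlet_temp, list_sheddable_jobs, migrate_job) are hypothetical stand-ins for whatever facility monitoring and job migration machinery a real site would have.

    # Minimal sketch of temperature-driven load shedding.
    # All helper functions are hypothetical placeholders, not a real API.

    import time

    TEMP_LIMIT_F = 80.0      # upper bound of the acceptable free-cooling range
    CHECK_INTERVAL_S = 60    # how often to re-poll facility sensors


    def read_inlet_temp() -> float:
        """Placeholder: poll a facility sensor; here we just return a dummy value."""
        return 78.0


    def list_sheddable_jobs() -> list[str]:
        """Placeholder: return job IDs ordered from least to most critical."""
        return []


    def migrate_job(job_id: str, destination: str) -> None:
        """Placeholder: checkpoint a job here and restart it at another site."""
        print(f"moving {job_id} to {destination}")


    def shed_load_until_cool(destination: str = "another-site") -> None:
        """Shift work away from this site while the inlet temperature is too high."""
        while read_inlet_temp() > TEMP_LIMIT_F:
            jobs = list_sheddable_jobs()
            if not jobs:
                break  # nothing left to move; facility staff must intervene
            migrate_job(jobs[0], destination)
            time.sleep(CHECK_INTERVAL_S)


    if __name__ == "__main__":
        shed_load_until_cool()

The point of the sketch is the coupling Gill describes: facility sensor data feeds directly into a decision about which compute to move, all the way from the cooling plant down to the CPU.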

We could actually do this with supercomputing services on a national scale. The United States is large and diverse enough that, if it decided to treat HPC as a strategic resource managed under a single coherent strategy and then behaved accordingly, it could build datacenters around the country in areas with beneficial climates, near cheap and/or environmentally friendly power sources. Of course, this would require datacenter owners and managers to give up the notion of being collocated with their machines, a tradition that is rapidly crossing out of the realm of quaint expression of an owner’s prerogative to give VIP tours and into the realm of misuse of financial, energy, and natural resources.

Comments

  1. I doubt that the US taxpayer would want us idling a $300,000,000 computer to save $5,000,000 in cooling. It’s not nearly as simple an issue as you make it out to be.

    If all (or even most) large-scale HPC applications were as distributable as Google’s, you might have a point, but (in my experience) they’re not.

  2. dmr – Most of the government’s many HPC centers aren’t hosting $300M computers; the large ones are hosting $15-$50M computers, with a few outliers, which makes the power bill a much larger fraction of the acquisition price (it’s not uncommon to talk to directors with $1M power bills and $25M in computers). Under conceivable circumstances the energy landscape could change so that the power bill would be worth managing, either because of changes in the law or changes in the demand structure that result in the placement of hard caps. Demand shedding to manage load on the local utility and/or minimize cost makes great sense to me as part of a national infrastructure for HPC that produces results in a way that minimizes all the relevant costs to the taxpayer.

    And with the right infrastructure, some of which already exists (Evergrid developed some of this stuff in some early demos I saw several years ago), you could indeed dump the state of running jobs out to disk without modifying the application source and transport it; a rough sketch of that flow appears at the end of this comment. Of course you could do this with a VM today as well, but the claim of the Evergrid folks was that (at the time) their software had less performance impact. I’m not sure how that has evolved in the intervening couple of years.

    I don’t claim it is a simple issue; I do claim that the US government has not put any teeth in its repeated rhetoric calling for management of the country’s computing infrastructure as a national resource. When/if it does decide to get serious about it, then demand shedding across a centrally managed network of computational capacity will be a cost management opportunity worth considering.
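    To make that checkpoint-and-migrate idea concrete, here is a rough sketch under the assumption that some system-level checkpointer (in the spirit of what Evergrid demonstrated, or a VM snapshot) can serialize a running job to an image file. The checkpoint_job, transfer_image, and restore_job helpers are hypothetical placeholders, not any particular product’s API.

        # Rough sketch of demand shedding by checkpoint-and-migrate.
        # All three helpers are hypothetical placeholders; a real deployment
        # would use a system-level checkpointer or a VM snapshot plus the
        # sites' own image transport and scheduler integration.

        from pathlib import Path


        def checkpoint_job(job_id: str, scratch: Path) -> Path:
            """Placeholder: serialize the running job's state to an image file."""
            image = scratch / f"{job_id}.ckpt"
            image.write_bytes(b"")  # stand-in for the real checkpoint image
            return image


        def transfer_image(image: Path, remote_site: str) -> None:
            """Placeholder: ship the checkpoint image to the destination site."""
            print(f"would copy {image} to {remote_site}")


        def restore_job(job_id: str, remote_site: str) -> None:
            """Placeholder: ask the destination scheduler to resume from the image."""
            print(f"would restart {job_id} at {remote_site}")


        def shed_job(job_id: str, remote_site: str, scratch: Path = Path("/tmp")) -> None:
            """Checkpoint a job locally, move its image, and resume it elsewhere."""
            image = checkpoint_job(job_id, scratch)
            transfer_image(image, remote_site)
            restore_job(job_id, remote_site)
            image.unlink()  # reclaim local scratch once the job is running remotely


        if __name__ == "__main__":
            shed_job("climate-run-42", "cooler-site.example.org")

    Whether the checkpointer is application-transparent or VM-based is an implementation detail; the management question is who decides when a job moves and which site absorbs it.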

  3. Martin Antony Walker says

    I was asked by the European Commission to make some provocative remarks to open a session on “ICT Infrastructures for Science: Virtualising Global Research” at the EC event ICT 2008 last November in Lyon, France. One of the remarks was that economies of scale and green economics suggest putting all the IT infrastructure needed to support scientific research in the European Research Area (ERA) in a huge data center in Iceland. This did not go down well with supercomputer center people, some of whom apparently see the advent of cloud computing as an existential threat. They argue that people who fund supercomputers will want to be able to see and touch what is being bought, within their jurisdiction.

    Your suggestion for the US is essentially similar. It will be interesting to see how soon this rational way of thinking about provisioning IT for science becomes broadly acceptable.