An article yesterday by Ted Samson at InfoWorld talks about power capping and its role in getting the most out of the power you already have, rather than provisioning for the nameplate maximum draw (rarely seen in practice) or picking some lower figure and hoping you never cross it.
As the name implies, power capping refers to the practice of limiting how much electricity a server can consume.
…Let’s say you have a max power envelope of 1MW. For the sake of argument, let’s say 400,000 watts of that megawatt goes to power, cooling, storage, and networking equipment, which leaves 600,000 watts to allocate to your servers. You decide to stick to the power allocation printed on the nameplates of your machines, which is 400W. That means that your budget allows 1,500 1U servers in your datacenter.
But what if, in reality, your servers never need more than an average of 300 watts of power to maintain their required performance level? If there were a way to ensure you didn’t exceed your 1MW power limit, you could pack 2,000 1U servers into the same amount of space, with little to no need to add power and cooling infrastructure.
That’s where power capping comes in. With power capping and complementary management software, you could ensure that no server draws more than 300 watts at any given time. Some companies, such as Intel, have developed power capping technology that can be applied at the rack level.
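The arithmetic in the quoted scenario is simple enough to sketch out. Here's a minimal back-of-the-envelope calculation in Python; the figures (1 MW envelope, 400 kW of overhead, 400 W nameplate, 300 W cap) come straight from the example above, and the constant and function names are just illustrative, not from any real capping tool:

    # Back-of-the-envelope power budgeting from the quoted example.
    TOTAL_ENVELOPE_W = 1_000_000   # total facility power envelope: 1 MW
    OVERHEAD_W = 400_000           # power, cooling, storage, networking gear
    SERVER_BUDGET_W = TOTAL_ENVELOPE_W - OVERHEAD_W  # 600 kW left for servers

    NAMEPLATE_W = 400              # per-server nameplate rating
    CAPPED_W = 300                 # per-server enforced cap

    def servers_for_budget(budget_w: int, per_server_w: int) -> int:
        """How many servers fit if each one is provisioned at per_server_w watts."""
        return budget_w // per_server_w

    print(servers_for_budget(SERVER_BUDGET_W, NAMEPLATE_W))  # 1500 servers
    print(servers_for_budget(SERVER_BUDGET_W, CAPPED_W))     # 2000 servers

The point the example makes is that the gain comes entirely from provisioning against an enforced cap rather than the nameplate number: the same 600 kW budget buys you 500 more machines.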
More in the article, including a discussion of Intel’s Dynamic Power Node Manager Technology and some actual results. This is one of those technology vectors that may impact our operations in HPC, and it’s one of the things we’ll touch on in a future episode of the green HPC podcast series running now.