Power-efficient servers, virtualization, and green HPC


We’ve been talking a bit here lately about the uses of virtualization in HPC, and there are strong advocates on both sides of the question. The general roar of the IT press, though, is in virtualization’s favor, which is why I was pleased to see this counterpoint at SearchDataCenter.com.

While many IT pros use virtualization to avert server sprawl and keep power costs low, others lack the resources to go virtual or resist it because of the performance overhead, however small it may be. One such company is London-based Last.fm Ltd., a large and fast-growing social networking and free music-sharing website. The company uses open-source OpenVZ virtualization in its testing and development environment but has said no way to production-level virtualization.

…”Server consolidation is the reason a lot of companies adopt virtualization, so if you don’t have a utilization issue, that certainly eliminates the major reason to adopt it,” said Haff. “If you have HPC [high-performance computing] or other types of Web 2.0 and grid environments where you are running applications across a large number of similar systems, you see virtualization being used, but it’s certainly not the low-hanging fruit.”

So what’s a mother to do when excess capacity is needed but you don’t have a bunch of servers with spare cycles? Well, add more servers, of course. But eventually you’re going to run into power issues, which is why it may make sense to start swapping out older servers for new ones solely on the basis of power efficiency.

Last.fm now has two chassis of four-socket Sun x6450 blade servers running Intel’s six-core processors. With two chassis in a rack, Last.fm installed a total of 20 blades running on a 32-amp supply. These blades, used as Web servers, also take up less space and have more computing cores: 240 cores per chassis and 480 cores in the rack, Jones said.
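Those numbers check out if you run them. Here’s a quick back-of-the-envelope sketch in Python; the 10-blades-per-chassis figure is my inference from the 20 blades across two chassis, and the rest comes straight from the quote:

    # Core math for the blade setup described above.
    # Assumed: 10 blades per chassis (20 blades / 2 chassis),
    # 4 sockets per blade, 6 cores per socket.
    blades_per_chassis = 20 // 2
    sockets_per_blade = 4
    cores_per_socket = 6

    cores_per_blade = sockets_per_blade * cores_per_socket    # 24
    cores_per_chassis = blades_per_chassis * cores_per_blade  # 240
    cores_per_rack = 2 * cores_per_chassis                    # 480

    print(cores_per_chassis, cores_per_rack)  # 240 480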

“Previously we were using 1U servers, which had dual quad-core CPUs, and we could get 28 of them in a 32A supply (or one rack). So we went from 224 cores (28 machines times eight cores) to 480 in the same space and power,” Jones said.
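That works out to better than a 2x gain in core density for the same power envelope. A minimal sketch of the comparison, using only the figures from the quote (cores-per-amp is just my metric for the same-supply comparison, not Last.fm’s):

    # Old vs. new core density at the same 32A rack supply.
    old_cores = 28 * 2 * 4   # 28 1U machines, two quad-core CPUs each = 224
    new_cores = 480          # from the blade math above
    amps = 32

    print(old_cores / amps)       # 7.0 cores per amp
    print(new_cores / amps)       # 15.0 cores per amp
    print(new_cores / old_cores)  # ~2.14x improvement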