Green HPC Podcast Series

Green HPC: a look beyond the hype

An exclusive six-part podcast series from insideHPC that examines green initiatives from all sides of the HPC ecosystem.

Episode 6 out now!

You’ve heard all the hype around green computing for the past 18 months, and now it’s starting to spill over into HPC. You can’t go to a conference or trade show in HPC these days without seeing “green” in just about everyone’s booth.

But is there more to it than just marketing hype? Supercomputing isn’t email and PowerPoint; it’s, well, super — the science we support has the potential to change the world. Should we get a pass on reducing our energy use? And if we don’t, what solutions are out there that pertain to us?

In this six-part audio series, launching in June, insideHPC takes a look at the issues around green HPC from all angles. We talk to center managers, activist organizations, community leaders, and major vendors to find out what happens when HPC and green computing intersect.

Episode One: Sifting through the hype

In the inaugural episode of the Green HPC podcast series we examine the issues that datacenter managers and system designers are facing with high performance computing systems of all sizes today. Even if you aren’t “green at heart,” there are very practical and compelling reasons why a growing awareness of energy use in your datacenter — how much, where it goes, and what it costs you — is critical to your success.

In this episode we hear from Wu-chun Feng of the Green500, Wilf Pinfold of Intel, Horst Simon of Lawrence Berkeley National Lab, and Dan Reed of Microsoft Research.

Episode details, speaker biographies, links, and more.

Episode Two: IT, HPC, and where the twain shall meet

In the second episode of the Green HPC podcast series we talk to IT and HPC industry leaders about the primary drivers for the adoption of energy-aware (“green”) computing practices in IT at large, and then home in on HPC and how the customers, workloads, and solutions differ between the two.

In this episode we hear from Pat Tiernan of the Climate Savers Computing Initiative, Pete Beckman of Argonne National Lab, Ed Turkel and Steve Cumings of HP, and Christian Belady of Microsoft.

Episode details, speaker biographies, links, and more.

Episode Three: What do I get out of going green?

Despite the rhetoric, saving the environment doesn’t seem to be what motivates HPC people to go green. So why do people care about green in HPC, and in particular, what do large datacenters get out of it? Change is hard, so why are datacenter owners and managers making the change?

One of the stories that encapsulates many of the forces driving us toward green computing today comes from IBM. We talked to IBM’s Dave Turek about what the company had in mind 10 years ago when it started work on Blue Gene, and what lessons that machine’s success at doing lots of computation in an energy-efficient way holds for us today. We also revisit our conversations with Pete Beckman of the Argonne Leadership Computing Facility, Steve Scott at Cray, and Sumit Gupta at NVIDIA about what their customers are looking to get out of taking green steps in HPC.

Episode details, speaker biographies, links, and more.

Episode Four: Stop pampering your processors!

The use of commodity components in HPC has made supercomputers bigger and cheaper to buy while driving their operating costs through the roof. Where once stood 100 kW supercomputers, systems of 1,000 kW and up are now common. Even if you aren’t motivated to save the environment, you are probably under a lot of pressure to reduce costs. But where to start? By not pampering your processors.

Datacenter managers today generally follow ASHRAE guidelines when deciding how to cool their machine rooms; those guidelines specify that datacenters operate between 20 and 25 degrees C, about 68 to 77 degrees Fahrenheit. But server manufacturers routinely build IT equipment to withstand temperatures up to 100 degrees F, and even higher routine operating temperatures are possible with a little engineering. In this episode we talk with LBNL’s Horst Simon, Microsoft’s Christian Belady, and HP’s Steve Cumings to find out why we persist in running machine rooms as cold as meat lockers, and what we can do about it.

And if you are certain that you couldn’t possibly run five servers outside in a leaky tent for nine months with 100% uptime, you’ll definitely want to listen to this episode.

Episode details, speaker biographies, links, and more.

Episode Five: Turning up the heat

Typical machine rooms today operate between 20 and 25 degrees C, about 68 to 77 degrees Fahrenheit, an operating range that dates from a time when it was the people in the room who needed cooling, not the computers. And even experienced datacenter managers spend a lot of time and energy building clusters out of servers with components they don’t need, in buildings that are far cleaner than they need to be.

In this episode we talk with Argonne’s Pete Beckman and Microsoft’s Christian Belady about the specific ways their organizations are working with their datacenter and hardware vendors to improve operational efficiency. Both are using a proactive, measurement-based approach to guide them toward more effective operations without voiding warranties or disrupting production. And speaking of warranties, we also talk with Steve Cumings of HP to find out whether one of the world’s largest server suppliers is really ready — and willing — to work with customers to redefine how datacenters are run.

Episode details, speaker biographies, links, and more.

Episode Six: Green technologies of the future

In this episode we talk with companies and supercomputing centers at the forefront of thinking today about the new technologies we’ll need tomorrow. In our conversations we touch on the full spectrum of green technologies, from “bits to buildings” as Horst Simon says. On the buildings side of the spectrum we talk with our guests about local power generation, integrated approaches to work scheduling that incorporate knowledge of power rates and datacenter hot spots, integrated monitoring, and allocating user time in kW-hours instead of CPU hours. On the bits side we talk about evolutions of today’s processor architectures, the likelihood of a return to custom processors for HPC, and technologies for the rest of the computer that will give us both the opportunity — and the challenge — to completely rethink the way we structure algorithms.

Episode details, speaker biographies, links, and more.