Green HPC Episode 1 Transcript

Transcript of Green HPC Episode 1: Sifting through the Hype

This is the transcript for episode one of the Green HPC podcast series, Sifting through the Hype. You can find out more about the series at the Green HPC podcast series home page, and you can listen to the audio of this episode, find out more about the speakers, and get access to links and presentations that they’ve suggested at the episode 1 homepage.


John West:

[0:00] insideHPC is a proud sponsor of SC09, the International Conference for High Performance Computing, Networking, Storage and Analysis.

John Kirkley:

[0:08] Is your data center green, or a snarky shade of brown? If you are burning up the kilowatts, come to Portland, Oregon this November for SC09 and learn from the experts. SC09: Going strong and going green.

Dan Reed:

[0:26] The right metric is really some ratio of effective operations to a product of watts times dollars, because in the end if you want to minimize energy consumption, just turn your computer off and you are done.


[0:53] Hello and welcome to insideHPC’s GreenHPC Podcast Series. I am John West, one of the writers at insideHPC and it is great to have you listening.

[1:01] That was Dan Reed at the top of the podcast talking about how he looks at the framing of all of the issues around energy use in high performance computing. In fact, this isn’t just his way of looking at these issues. I actually heard variations on those ideas from many of the people I talked to.

[1:17] And that brings out an important point about why we decided to do this series in the first place. Some of you know that I and the other John and Christine and everyone else who writes at the site don’t just write about and report on HPC, we spend our careers working in HPC Centers and for HPC companies. So we live this stuff everyday.

[1:37] And like a lot of you we are interested not just in HPC but in computing in general. And over the past maybe two years, but definitely in the past 12 months, it seems like there has just been more and more talk in articles and advertising around the whole idea of green computing.

[1:56] And there has been a national study by the EPA on general IT data centers, the effect that they have on the environment, and the role that they play in energy consumption. And now there is a whole slew of new products and services that will make your IT greener, from software that automatically turns off your printers to lower-power server chips in the racks that we put in our data centers.

[2:18] And I think it is fair to say we aren’t spring chickens. Most of us are old enough to have been through several of these IT phases and we are probably more than a little jaded by them. So for a while I mostly ignored all this, partly because it smells like a fad, but also partly because I am not running email servers and websites, right?

[2:37] I do supercomputing, and my users are really, literally, helping to make the world a better place, so if we use a few extra watts, who cares? But as the volume around the green computing concepts kept getting cranked up, I started to wonder if there really was something that I should at least know a little bit more about before I started ignoring it.

[2:59] And then I actually did a couple of stories with Pete Beckman at Argonne, who we will hear more from in depth later in the series. He is someone I have a lot of respect for, and Argonne is clearly a leader in HPC. They were talking about the things they were doing to reduce energy consumption in supercomputing, but it was all very practically focused: reducing energy costs in response to real budget pressures, or making bigger and bigger computers fit into a fixed energy budget. Things like using outside air to cool their data center, and this stuff made a lot of sense to me.

[3:29] So then I noticed that even more people who I really respect and who are real leaders in the community were talking about green HPC, and it started to seem like I really needed to know more about this. And that’s what this series is about, in the eight or so episodes. We are not really sure exactly how many it is going to be, but eight seems like the right number right now.

[3:48] We are going to look at energy use in high performance computing, energy use in the computers and in the support infrastructure, air conditioners and stuff like that, and talk about the various ways that people and companies are managing and reducing their energy use right now, and what some of the innovative research may lead to in the future.

[4:05] So, we will hear from people with an environmental commitment, but in HPC that commitment isn’t the only reason people are thinking green. If it was, I don’t think it would make much difference. It certainly wouldn’t be getting as much attention as it is today, but reducing energy use in the chips, in the systems and in the cooling systems is letting people do even more computing than they could do before.

[4:25] So, the business driver, which is more computing despite cost or facilities constraints, lines up with the environmental driver, which is less energy use, and that turns out to be a pretty convincing case. So we got all of this started because of the hype that we were hearing, right? All of the advertising, the booths, and even computers with some variation of “Go green” stuck on them, first in IT, and then it crept over into HPC.

[4:51] We wanted to talk with someone who has been thinking about green in HPC for a long time and get a sense from him of how the hype has built up and what’s actually behind it. So we called up Wu Feng, one of the motive forces behind the Green500 list that was launched in 2007, to get his take on all the marketing and the hype.

[5:10] He started by telling me about a “birds of a feather” session that he helped organize at ISC in 2008 in Germany. He went around taking pictures of all the booths on the show floor and then showed those pictures at the start of his session to make a point about where we are today. He actually finds some value in the hype.

Wu-chun Feng:

[5:28] If you don’t have something green in your booth, you are the exception rather than the norm. And like I said, it goes through different phases. Before this, the “G” word was actually grid, and it was very difficult to find a booth that didn’t mention the word “grid” somewhere in the booth. And so we are going through these different phases, so is this hype?

[6:04] For now, there is some level of hype, and I think the hype is a good thing to bring attention to a problem that has largely been ignored and scoffed at. In fact, in early 2000 I gave talks on energy-efficient computing at a time when I literally was booed or hissed off stage, and it has come around full circle.

[6:32] People are starting to understand that this is a problem, and so the real issue then is how you separate the hype from the substance. I think a little bit of hype is good in the sense that it draws attention to the problem, but what needs to be done is to have some substantive follow-up to that.


[6:56] Yikes, booed off the stage, I’m not sure I’d handle that too well, but it also tells an interesting story about HPC people and how we are really focused first on getting really high levels of performance and getting that performance applied to some of the biggest problems in the world.
[7:12] Anything that gets in the way of that just has to go. And we are technologists, not environmentalists, for the most part anyway. It has just never been part of the mainstream HPC conversation to talk or care about the environment as part of getting the job done, which is kind of natural.

[7:29] If you think about it, we don’t work outside. Computer rooms don’t produce waste products that get dumped into streams and computer rooms are usually pretty sterile places with very controlled environments and they are just about as far from the outside as you can get.

[7:45] So then we called up Wilf Pinfold. He is the Director of Extreme Scale Systems at Intel, and he is also the General Chair of SC09 in Portland this year, and we asked him about the hype and the realities of green computing. In particular, we wanted to get his perspective on whether HPC and supercomputing should just be viewed by society as exempt from concerns about energy use from an environmental perspective.

[8:08] I mean we have cars and airplanes and power plants all belching out way more carbon into the atmosphere than we might consume as a result of the power we use in our data centers, right? And the computations we support make better Kevlar vests that save lives, make safer and more efficient cars, make safer airplanes, predict earthquakes, predict the weather, so why don’t we just get a pass since what we do is so fundamentally important?

Wilf Pinfold:

[8:35] You know, I am not sure. I would think of it a slightly different way. I would say the barrier to bringing the value of the technology that we are currently capable of, the barrier to bringing that to the world, is an energy barrier, and we all want to use that energy responsibly so that we can have more technology.

[9:01] Because technology, as you point out, is why we don’t have mass starvation in various parts of the world, and why we don’t have rampant epidemics. We use technology, and not just computing technology, which obviously is a big part of it, but the technology that we have today enables a society that is far better in a number of ways than a society without technology.

[9:29] And the barrier to getting it everywhere is currently energy use, and we need to think about energy not so much from a perspective of cutting back as from the perspective that it is a limited resource. I don’t think any of us can afford to be given a pass, but I think there needs to be a true understanding of the fact that we are doing this not to cut back on technology, but to make technology available to more people.


[10:05] So, what has happened over the past year or two is that the downward price pressure that commoditization has brought to the components of supercomputers has made them steadily cheaper. And this has actually been going on for a long time, but the systems have gotten so much cheaper and so much larger that the larger centers, say the top 100 at least, are now in many cases able to buy more computer than they can cool or power.

[10:32] And so that means that they are having either to build out new facilities, or at least put what in some cases can be millions and tens of millions of dollars into new cooling and electrical infrastructure to power the computers. We have this problem at my own work, where we are investing a lot of money to add another 1,000 or so tons of cooling and eight megawatts of power capacity.

[10:53] And so with all of these changes, people have started to realize that reducing energy use would let them install much larger systems without building out new, or at least as much new, facility infrastructure. And so this brings us to where we are today, where the ideas around GreenHPC are starting to be seriously looked at by the major centers.

[11:11] We talked to Horst Simon, who is the Associate Lab Director for Computing Sciences at Lawrence Berkeley, among other titles that he has, and asked him what he thinks about all this. In particular, we wanted his take on whether the HPC community actually cares about its impact on the environment and about reducing its carbon footprint, and whether it should care.

Horst Simon:

[11:35] Ah, this is a real tricky question. As citizens of the world, of course we all should care about reducing our carbon footprint. However, I think actually that if you ask specifically, does the HPC community care about reducing its carbon footprint, I will be sort of a contrarian and say, actually the HPC community doesn’t really care about reducing its carbon footprint.

[12:05] What the HPC community does care about is doing more computing and what has happened in the last two or three years is that the high cost of electricity and the fact that in many instances the supply of power for computer rooms and for centers has become a barrier to further expansion. These two factors are a big concern to the HPC community.

[12:31] So I don’t think that the HPC community, and I may be too cynical here, is green at heart. The HPC community has been about computing, wants to do more computing, and will continue to do more computing. So the HPC community is, I believe, interested in energy efficiency simply because by reducing the electricity bills, you can buy more computers and you can do more computing.


[12:54] So, a lot has been made in the green IT community and in that conversation of the idea that IT uses between 1 and 2% of all energy consumed in the US. This actually doesn’t seem like a very big number to me, and it didn’t make a very compelling case; there are much larger segments.

[13:17] And one of the things that Dr. Simon mentioned while we were talking is that making an impact on the profile of our national energy consumption really requires some major policy focus on the consumers of power and the generators of carbon emissions that hold a much larger share of the pie.

[13:35] And I think this idea is really pretty reasonable and it resonates with me. I mean when you start optimizing an application for performance, you don’t start in the subroutines that use 2% of the compute time. You start in the routine that uses 25% of the time and then you work your way down.
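The rule of thumb John describes, profile first and then attack the biggest consumer of time, can be sketched in a few lines of Python. The routine names and timings below are invented purely for illustration:

```python
# Hypothetical profile: routine name -> seconds of compute time.
# The rule of thumb: rank by share of total time and start optimizing
# at the top, not in the routines that account for only a few percent.
profile = {
    "solver_kernel": 250.0,
    "halo_exchange": 120.0,
    "io_checkpoint": 80.0,
    "init_mesh": 20.0,
    "logging": 10.0,
}

total = sum(profile.values())
# Sort routines from largest to smallest share of runtime.
for name, seconds in sorted(profile.items(), key=lambda kv: kv[1], reverse=True):
    share = 100.0 * seconds / total
    print(f"{name:15s} {share:5.1f}% of runtime")
```

With these made-up numbers the top routine accounts for over half the runtime, which is where the optimization effort pays off first.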

[13:51] But here Horst made a really interesting point about the way that the conversation about the environment and power consumption and computing has gotten muddled up. He points out that they aren’t the same thing at all.


[14:03] And to put it also another way, there is often a tendency to equate power consumption with carbon footprint. That equation is not exactly correct. You have to look at where your power is coming from. If you have carbon neutral power or other power that is coming from whatever, from wind or solar and you drive your computing center just with that carbon neutral power, it doesn’t really matter how much you consume, you are not increasing the carbon footprint.


[14:40] And you know that point of view really makes a lot of sense, but until I heard him say it, it didn’t occur to me how confused my thinking was about carbon footprint and about reducing impact on the environment versus reducing energy consumption with, for example, a locally generated power source like a wind farm or solar generator. So we are going to hear a lot more from both of those guys later on in the series.
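Simon’s distinction can be made concrete with a back-of-envelope calculation: carbon footprint is energy consumed multiplied by the carbon intensity of the supply, so a large consumer on carbon-neutral power can have a smaller footprint than a small consumer on a coal-heavy grid. The intensity and power figures below are illustrative assumptions, not measurements from any real center:

```python
# Carbon footprint = energy consumed x carbon intensity of the supply.
def carbon_footprint_tonnes(power_mw, hours, grams_co2_per_kwh):
    """CO2 emitted, in metric tonnes, by a facility drawing power_mw for hours."""
    kwh = power_mw * 1000 * hours          # MW -> kW, times hours of operation
    return kwh * grams_co2_per_kwh / 1e6   # grams -> tonnes

year_hours = 24 * 365
# Assumed intensities: ~900 g CO2/kWh for a coal-heavy grid, 0 for a
# dedicated carbon-neutral source such as wind or solar.
coal_heavy = carbon_footprint_tonnes(2.0, year_hours, 900)
wind_solar = carbon_footprint_tonnes(8.0, year_hours, 0)

print(f"2 MW on a coal-heavy grid: {coal_heavy:,.0f} t CO2/year")
print(f"8 MW on carbon-neutral power: {wind_solar:,.0f} t CO2/year")
```

Under these assumptions the 8 MW center emits nothing while the 2 MW center emits thousands of tonnes a year, which is exactly the point: power consumption and carbon footprint are not the same quantity.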

[15:06] One of the things that is important to understand when you are thinking about the big picture in GreenHPC is that all of these things do sort of come together. You have got a background motivation, or maybe it is a parallel motivation, to do the right thing for the environment.

[15:21] And then there is the element of organizational objectives where you get points because you are saving money for the company or you happen to work for one of those organizations that have a top 10 commitment to saving energy or protecting the environment.

[15:36] And then for, I would say most of the high performance computing community, certainly the high end of the HPC community, there is the big motivation of using less energy so that we can do more computing.

[15:46] One of the people that we really wanted to talk to was Dan Reed at Microsoft Research. He spent his whole career thinking about big computing and about the impact of big computing on our national competitiveness and on the future of science and engineering and research and development.

[16:03] Now he works for Microsoft thinking about how the software and hardware technologies available today or being created in his organization are going to help us get even bigger and manage all of that complexity. So anyway we called him up and this is part of what we talked about.

Dan Reed:

[16:20] There are several practical, political and ethical issues there. Let’s start with maybe the practical first. All of the people who are deploying clusters in the academic research space, even the sort of departmental research clusters that are scattered around national labs and that the companies have as well, just about every CIO or university chief research officer I talked to is struggling with the demands on physical plant and the aggregate costs of infrastructure and the energy associated with those clusters.

[17:06] So, from a very practical perspective, anything you can do to make them more energy efficient is going to reduce physical plant cost and operating cost. And in the economically constrained environment where we are now, any organizational efficiency is goodness, and that’s going to make a practical difference. So, at that level, there has to be a practical reason to care.


[17:30] So, if I could just interrupt you for a second.


[17:33] Sure.


[17:34] Kind of bouncing off that idea, in my program for example, we’ve got a fixed acquisition budget, $40 to $50 million a year, and we buy all the flops that we can get for $40 or $50 million. And we kind of make everything else work out in terms of the power budget, and we will throw in another transformer if we need to, to bring more juice, and eventually we are not going to be able to do that anymore.

[17:56] Are there two different segments in high-end computing? I guess you could even go all the way down to just regular HPC computing, where you have got one group that, if it could get a petaflop in three megawatts, would buy two petaflops because it has a six-megawatt data center. And then is there another group that maybe has a fixed requirement, and the cheaper they can run at that level of capability the better?


[18:25] Yeah, I think there is some bifurcation like that. I mean, one of my friends was reminding me there is an old economic paradox, named after a British economist from before this century in the context of coal and steam, called Jevons Paradox. It says that if you make something cheaper and more efficient, it doesn’t mean that demand will go down; it may actually go up. And what you said about the high-end is sort of an example of that.

[18:56] So, let me first finish the low-end part of your question and my comment, and then I will come back to the high-end. I think at the low-end, those things are often just driven by very practical physical constraints. If you are a university researcher and your research computing infrastructure is what you can put in the closet down the hall, or what you can put in some minimally refurbished lab space, you have got very practical power and cooling constraints, because it’s often put in places that aren’t necessarily architected for efficient cooling. So anything you can do that will lower the energy demand is going to allow you to save money and deploy things you otherwise couldn’t, and that’s sort of part of your question.

[19:52] I think at the high-end, yeah, at the leading edge there are certainly people who, if they had the energy budget, would simply buy twice as much. And there is a different dynamic at work in that market for sure, but I also think there is a different set of issues at work as we look at petascale and beyond, at what is actually going to be viable to build. And I think here is one of the dynamics that’s going to play out.

[20:26] Unlike the sort of corporate data center space where people — even corporate data centers, forget cloud data centers which are a whole other order of magnitude bigger — people place facilities based on a whole spectrum of capital and operating costs, and no particular illusion that those facilities are actually close to the people who are going to use them.

[20:57] And I think in the HPC space, we are still in a world where people want to have their machines nearby and not necessarily for technical reasons, but for show and tell and political reasons. And so if you relax that constraint, you can actually do a different kind of optimization than you can, if you say, well my 20-megawatt facility has to be next to my research institution or my government lab, because there are different set of dynamics at work there.


[21:30] So, unlike an email center for Lockheed Martin, for example, we don’t just want to drop the amount of energy that we consume, where it’s OK if it takes five minutes for an email to get there as opposed to one minute. What we actually ought to be concerned about is responsible use of the energy that we consume: looking at the whole system, the data center, the supercomputer and everything, and making sure that we are getting the most science per watt, for lack of a better metric, out of the power that we do use. What’s your reaction to that?


[22:06] I think that’s right. There is actually a version of a talk I gave about the energy optimization issue. I’ve been making the point or trying to make the point that the right metric is really some ratio of effective operations to a product of watts times dollars. Because in the end if you want to minimize energy consumption, just turn your computer off and you are done.

[22:29] You are actually trying to accomplish something with it, and you are trying to do so in an environment where you have capital cost constraints, but you also have operating cost constraints. And both of those translate into some metric that’s related to energy. And at really large scale, if you look at data centers, post-petascale systems are going to be the same way: roughly half of the total cost of ownership is related to mechanical and electrical in one way or the other.

[23:06] The computing is no more than half and arguably a declining fraction if we stay on the commodity path. So, yeah I do think there is a quality, a value of the computation that goes with doing so in an environmentally friendly way.
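Reed’s proposed metric, effective operations divided by the product of watts and dollars, can be illustrated with a toy comparison of two hypothetical systems. All of the numbers below are made up for illustration; the point is only that the lowest-power machine is not automatically the winner once delivered work and cost are folded in:

```python
# Dan Reed's metric: effective operations / (watts * dollars).
# Higher is better: more useful work per watt-dollar.
def reed_metric(effective_ops_per_sec, watts, dollars):
    return effective_ops_per_sec / (watts * dollars)

# System A: faster but power- and capital-hungry (hypothetical figures).
# System B: slower but frugal (hypothetical figures).
a = reed_metric(2.0e15, 6.0e6, 50e6)   # 2 PFLOPS effective, 6 MW, $50M
b = reed_metric(0.8e15, 2.0e6, 40e6)   # 0.8 PFLOPS effective, 2 MW, $40M

print(f"system A: {a:.3e}  system B: {b:.3e}")
```

With these invented numbers the slower system B actually scores higher, which captures Reed’s point: minimizing energy alone is trivial (turn the machine off), so the metric has to weigh the work accomplished against both power and cost.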


[23:25] So, all of that was really helpful to me in framing the conversation about reducing the amount of energy we consume, and about all of the issues that people are lumping into green computing and green IT, in a way that’s very relevant to HPC. In the past, even if we had been individually motivated to care about the environment, it wasn’t something that most of us brought to high-end computing at work in a way that informed how we managed our centers or, as we will talk about later in the series, how we schedule our jobs or even how we write new algorithms.

[24:04] So, that’s it for this episode. But you can find out more about the topics and the people in this episode by going to the insideHPC website and clicking on the link for the GreenHPC podcast series.

[24:16] In the next episode, we are going to talk a little more about the place that IT energy consumption occupies in the whole spectrum of ways that we use energy, and the direction we are headed with some of the new legislation being considered in the United States now and already passed in other parts of the world.

[24:35] We are also going to talk about green in IT and green in HPC, and where those things are different and where they are the same, both from the point of view of the workloads and of the solutions that apply to one and maybe not the other. And we have got a lot of really great conversation lined up for that episode. Until then, I’m John West, for all of us here at insideHPC. Thanks for listening.