
Green HPC Podcast Episode 3: What do I get out of going green?

Despite the rhetoric, saving the environment doesn’t seem to be what motivates HPC people to go green. So what are the reasons that people have for caring about green in HPC, and in particular what do large datacenters get out of going green? Change is hard, so why are datacenter owners and managers making the change?

Learn more about the series.


We are proud to have this episode sponsored by Cray.

Listen to Episode 3: What do I get out of going green?

Download Episode 3

Get the transcript

In this episode we examine the reasons that people have for caring about green in HPC, and in particular what large datacenters are expecting — and what they are seeing in practice — as they manage their energy consumption more carefully.

When you first tune into the green supercomputing conversation, you'll find people selling the idea in essentially three different ways. Some take a pragmatic approach: there is a fixed pot of money available to run a machine over its lifetime, and the less of it that goes to power and cooling, the more can be invested in the computer itself, so they minimize energy use as a way to maximize computing. Others argue the merits of green computing and energy reduction on cost savings alone: a simple financial-responsibility angle.

Finally, as we’ve heard already in this series, the most obvious take on green computing — doing it to save the environment — is in some ways the least common of the arguments. It is a concern, but it doesn’t seem to be the one that gets people out of bed.

And that’s what we want to talk about on this show. People don’t generally make a change without incentive, and we wanted to understand what the incentives in favor of taking green measures in HPC are, and which ones are shaping customer behavior.

One of the stories that encapsulates many of the forces driving us toward green computing today comes from IBM. We talked to IBM's Dave Turek about what the company had in mind 10 years ago when it started work on Blue Gene, and what lessons that machine's success at delivering a lot of computation in an energy-efficient way holds for us today.

We also revisit our conversations with Pete Beckman from the Argonne Leadership Computing Facility, Steve Scott at Cray, and Sumit Gupta from NVIDIA about what their customers are looking to get out of taking green steps in HPC.


Guest Bios and Links

Steve Scott, Cray

Steve Scott is Senior Vice President and Chief Technology Officer at Cray Inc., where he has been since receiving his PhD in computer architecture from the University of Wisconsin at Madison in 1992. Steve was the Chief Architect of multiple systems at Cray, architected the routers for the Cray XT line and follow-on systems, and is leading the Cray Cascade project funded by the DARPA High Productivity Computing Systems program. Steve holds over twenty US patents and has served on numerous program committees. He was the 2005 recipient of the ACM Maurice Wilkes Award and the IEEE Seymour Cray Computer Engineering Award.

Pete Beckman, Argonne National Lab

Pete Beckman is a recognized global expert in high-end computing systems. During the past 20 years, he has designed and built software and architectures for large-scale parallel and distributed computing systems. Pete joined Argonne National Laboratory in 2002 as Director of Engineering and later served as Chief Architect for the TeraGrid, where he designed and deployed the world's most powerful Grid computing system for linking production HPC computing centers for the National Science Foundation. In 2008 he became the Director of the Argonne Leadership Computing Facility, which is home to one of the world's fastest and most energy-efficient supercomputers for open science. He also leads Argonne's exascale computing strategic initiative and explores system software and programming models for exascale computing.

David Turek, IBM

Dave Turek is the Vice President for Deep Computing at IBM. He has business responsibility for high performance computing solutions including Power, Intel, and AMD based servers and workstations, Blue Gene systems, visualization solutions, and future technologies. His prior responsibilities included the launch of IBM's Linux Cluster business, IBM's early involvement with High Performance Grids, and development responsibility (hardware and software) for the IBM SP (affectionately recognized as Deep Blue, world chess champion, retired). Dave is also a member of the Council on Competitiveness High Performance Computing Advisory Committee. Dave has degrees in Philosophy and Mathematics and has studied at the University of Rochester, Trinity College, and the University of Pennsylvania.

Sumit Gupta, NVIDIA

Sumit Gupta has been a Sr. Manager in the Tesla GPU Computing HPC business unit at NVIDIA since 2007. In this role, Sumit is responsible for marketing and business development of the CUDA-based GPU computing products. Prior to this, Sumit served in a range of positions: Product Manager at Tensilica, Entrepreneur-in-Residence at Tallwood Venture Capital, Post-Doctoral Researcher at the University of California, San Diego and Irvine, chip designer at S3 Inc., and software engineer at IBM and at IMEC in Belgium. Sumit has a Ph.D. in Computer Science from the University of California, Irvine, and a B.Tech. in Electrical Engineering from the Indian Institute of Technology, Delhi. He has authored one book, one awarded patent, several book chapters, and more than 20 peer-reviewed conference and journal publications.