Epic HPC Road Trip Continues to NREL

In this special guest feature, Dan Olds from OrionX continues his Epic HPC Road Trip series with a stop at NREL in Golden, Colorado.

Dan Olds from OrionX hits the road for SC18 in Dallas.

Steve Hammond, Director of Computational Science at the National Renewable Energy Laboratory (NREL), took the time to speak with me on my third tour stop. The mission of NREL can be simply stated as “advancing energy efficiency, sustainable transportation, and developing renewable energy technology.”

When it comes to energy-efficient computing, NREL has to be one of the most advanced facilities in the world. It’s the first data center I’ve seen where the current PUE is displayed on an LCD panel outside the door. When I visited, the PUE of the Day was 1.027 – which is incredibly low.

PUE is Power Usage Effectiveness, a measure of data center energy efficiency. Basically, it’s the ratio of the total power going into the data center divided by the power actually delivered to the computing equipment inside it. In NREL’s case, for every 1.027 watts going into the data center, 1.00 watt of power is being delivered to the compute infrastructure.

This number becomes even more impressive when you take into account that the average data center has a PUE of 2-2.5 and the most efficient have PUEs of maybe 1.2. This ultra-low PUE saves NREL something along the lines of $800,000 annually in lower energy costs. Their capture and reuse of waste heat saves them another $200,000 they’d otherwise have to spend to heat the rest of the NREL facility. Upon hearing this, I sagely commented, “not too shabby.”
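The PUE arithmetic above is simple enough to sketch in a few lines. Here's a minimal illustration using the ratio NREL reported; the specific kilowatt figures are assumed for the example, only the 1.027 ratio comes from the visit:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / power delivered to IT equipment."""
    return total_facility_kw / it_equipment_kw

# Assumed illustrative figures matching NREL's reported ratio:
# 1,027 kW entering the facility, 1,000 kW reaching the compute gear.
print(round(pue(1027.0, 1000.0), 3))  # 1.027

# A typical data center at PUE 2.0 burns a full extra watt of overhead
# (cooling, power conversion, lighting) for every watt of compute.
print(round(pue(2000.0, 1000.0), 3))  # 2.0
```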

Eagle Supercomputer at NREL

Highlights:

  • We discussed Eagle, their newest supercomputer, an 8 Pflop monster that features a total of 2,114 Intel-powered compute nodes and 50 nodes featuring dual NVIDIA Tesla GPUs. Eagle is replacing Peregrine, providing 3x the capability in the same footprint while using only slightly more power. Amazing.
  • We then turned to the topic of ever-higher TDPs on processors and the need for liquid cooling. Steve pointed out that once you get to 30 kW per rack, you really can’t cool that with air anymore and liquid is the only alternative. NREL’s average is between 40-60 kW per rack and, if they used air cooling, they’d need enough fans to blow the equipment across the room.
  • Another amazing fact came out in our talk: from a thermodynamic perspective, a juice glass of water has the same cooling capability as a room full of air. The pump energy used to move that juice glass of water around is roughly a tenth of the fan energy needed to move the room full of air. In short, liquid cooling is on the order of 1,000x more efficient than air.
  • His biggest pet peeve? The fact that every liquid cooling vendor has different input and output connectors – so you can’t mix and match the same pipes when you’re using different systems. Vendors would be well advised to come up with a standard quick connect system and use it on all of their products.
  • We then visited the NREL computational center, where I had a chance to put on some high-tech 3D visualization glasses and walk into their simulation of air moving through a wind farm. It’s hard to describe the impact of wearing the glasses and looking at the model. The realism was incredible; you could easily see where the turbulence problems were the worst. The video is a bit choppy because I couldn’t put the glasses over the camera lens.
  • We also looked at a video model of airflow moving through a car to see, for example, how efficiently the air conditioning worked under different dashboard configurations. All of their models were highly interactive, allowing you to see the various effects and figure out how to improve flow.
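The liquid-versus-air claim from the list above has a simple physical basis: water holds far more heat per unit volume than air. Here's a hedged back-of-envelope check using standard textbook values (assumed: specific heat and density of water and of room-temperature air); the exact multiplier NREL quoted will also depend on flow rates and pump/fan efficiency:

```python
# Volumetric heat capacity = specific heat [J/(kg*K)] * density [kg/m^3]
water_j_per_m3_k = 4186.0 * 1000.0  # liquid water, ~20 C (textbook values)
air_j_per_m3_k = 1005.0 * 1.2       # dry air at roughly room conditions

# How much more heat a given volume of water absorbs per degree than air:
ratio = water_j_per_m3_k / air_j_per_m3_k
print(round(ratio))  # on the order of a few thousand
```

That thousands-to-one volumetric ratio is why a small volume of circulating water can do the cooling work of a huge volume of moving air, and why pump energy is so much lower than fan energy for the same heat load.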

I saw a lot of very cool uses of technology at NREL (pun unintended – puns are the lowest form of humor, only limericks are lower). I was very impressed and excited by what I saw at NREL and can’t thank them enough for taking the time for my visit.

Many thanks go out to Cray for sponsoring this journey. Our next tour stop is at Los Alamos, where we’ll be interviewing the ever-popular Gary Grider…stay tuned for more of the HPC Road Trip.

Dan Olds is an Industry Analyst at OrionX.net. An authority on technology trends and customer sentiment, Dan Olds is a frequently quoted expert in industry and business publications such as The Wall Street Journal, Bloomberg News, Computerworld, eWeek, CIO, and PCWorld. In addition to server, storage, and network technologies, Dan closely follows the Big Data, Cloud, and HPC markets. He writes the HPC Blog on The Register, co-hosts the popular Radio Free HPC podcast, and is the go-to person for the coverage and analysis of the supercomputing industry’s Student Cluster Challenge.
