
Our man in Texas: the Ranger dedication

[Ed: our very own John Leidel packed up the Chevy Nova and headed over to TACC to watch the goings on at the Ranger dedication. That's right, we now have a nationally roving team of reporters! Sure, he only got in because HPCwire scored him an invite, and he does live close by. But still: roving reporters.

Here are John's thoughts. I've linked in a few pics, but you should check out his whole album of candids (mostly of the gear, of course) with captions from the event at http://picasaweb.google.com/john.leidel/RangerRibbonCutting .]

One full hot aisle.  The hot aisles are completely enclosed with APC cooling ducts.  Notice the door on the opposite end of the aisle.

I made the pilgrimage down to the JJ Pickle Research Center at the University of Texas last week for the dedication ceremony of Ranger, TACC’s new behemoth supercomputer. The list of attendees was surprisingly diverse. Principal investigators from a host of disciplines, heavyweights from the petroleum industry, and several other big names in HPC all made an appearance.

Dr. Dan Atkins, Director of the Office of Cyberinfrastructure, was there to see what his $59 million bought the NSF. Hector Ruiz, CEO of AMD, was there caressing what has become the largest single repository of quad-core Barcelona silicon in the world. One note of interest was Sun Microsystems’ dedication to the event: Jonathan Schwartz, Scott McNealy, John Fowler and Andy Bechtolsheim (wearing his characteristic sandals) all made the trip to Austin.


Despite the long list of industry heavyweights, all eyes seemed to be on Dr. Jay Boisseau, Director of the Texas Advanced Computing Center. Jay’s attitude at the event was that of a new father. There wasn’t a single moment when you couldn’t find Jay smiling ear to ear over the triumphant arrival of his silicon child.

And Ranger will certainly require the care and feeding typical of a newborn. The 3,936 Sun blade modules emit a near-deafening whir, the power distribution units command more floor space than many entire HPC installations, and lifting the bundles of InfiniBand cabling would make even Arnold Schwarzenegger strain. Needless to say, I was a kid in a candy store.

Much of the content delivered during the ceremony was directed toward the real meaning of “open science computing.” Specifically, everyone discussed the impact of Ranger standing as the largest system in the world built for open science. I believe Dr. Atkins said it best. Ranger will be dedicated toward “finding knowledge needles in enormous data haystacks.”

Ranger’s allocations will be split three ways: 90% are headed for the TeraGrid, 5% are reserved for UT researchers, and 5% are reserved for TACC’s STAR program. That translates to roughly 56,678 cores dedicated to TeraGrid-allocated research projects, or about 1.3 million CPU hours per day directed toward open scientific research. TACC and the TeraGrid already have over 500 users with allocations totaling over 100 million hours for the first quarter of operation, and those involved fully expect to allocate all 125 million hours of TeraGrid time for the second quarter of production. Over its total lifespan, Ranger will deliver over 200,000 years of computational capability.
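As a quick back-of-the-envelope check on those figures (this sketch assumes Ranger's publicly reported total of 3,936 blades with 16 cores each, a number not stated in this post):

```python
# Sanity-check the allocation arithmetic quoted above.
# Assumption: 3,936 blades x 16 cores = 62,976 total cores (from public
# TACC specs, not from this article).
total_cores = 3936 * 16                     # 62,976 cores
teragrid_cores = round(total_cores * 0.90)  # 90% share -> 56,678 cores
cpu_hours_per_day = teragrid_cores * 24     # ~1.36 million CPU hours/day
hours_per_quarter = cpu_hours_per_day * 91  # ~124 million hours/quarter

print(total_cores, teragrid_cores, cpu_hours_per_day, hours_per_quarter)
```

The result lines up with the article: ~56,678 TeraGrid cores, about 1.3 million CPU hours per day, and roughly 124 million hours per 91-day quarter, which matches the 125 million hours expected to be fully allocated in the second quarter.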

Ranger is most certainly the result of a successful partnership between vendor and end user. As we embark upon the age of petascale computing, it is imperative that vendors and end user organizations continue to form meaningful relationships in order to ease the pain of constructing and operating systems at such scale.

For more information on TACC and Ranger, point your browser to www.rangersupercomputer.com.

