The Cluster Challenge is an event in its second year which encourages technical participation from university undergraduates. That’s right, undergrads. Each team is made up of as many as six undergraduate students, a supervising professor and a partnering HPC vendor. The students architect a machine with the support of their respective vendor organization. Not so tough, eh? The rules state that the students must constrain the design to a single rack powered by two 120-volt, 20-amp circuits [soft-constrained to 13 amps]. Metered power distribution units will ensure team compliance. The teams are required to run a series of standard applications and benchmarks [we’ll get to this in a later article]. They’re judged on workload accomplishment, benchmark performance and the overall system architecture.
InsideHPC had the pleasure of interviewing each of this year’s Supercomputing Cluster Challenge teams. The questions ranged from architectural decisions to the challenges associated with integrating the various applications. First, let’s introduce the teams.
.: Indiana-Dresden: A team split between continents, the Indiana University/Dresden University team chose IBM as their partner vendor. Last year, Indiana made a splash at the SC07 Cluster Challenge with a cluster powered by Mac OS X.
.: University of Colorado: The University of Colorado team is also a returning competitor. They’ll be armed with their vendor from across the street, Aspen Systems. The Colorado team not only has experience from last year, but will certainly be well coached at the hands of Doug Smith.
.: Arizona State: The Sun Devils are coming in with a fresh perspective on the competition. They’re teamed with Microsoft on a Cray platform! This is terribly exciting. Not only will we get to see a new Cray platform in practice, but we’ll also get to see exactly how well Windows HPC Server works in production.
.: MIT: The crew from Cambridge is on an interesting journey leading up to the Cluster Challenge. Literally! In true college style, they’re road tripping all the way to Austin in a bio-diesel fueled bus. They’re currently withholding details of their respective vendor due to the interesting nature of the architecture. Check them out at a BioWillie station near you.
.: Purdue: The Boilermakers are back at Cluster Challenge this year. Last year, they had some great fun with the comic-book inspired team introductions and great booth eye-candy. This year, they’re on track to win. Teamed with the low-power masters at SiCortex, they have the potential to pack a punch.
.: National Tsing Hua University (Taiwan): The team with the farthest to travel, NTHUCS will be back in Austin to push the limits again. Last year, they ran on a machine with a processor that Intel announced that very same day. HP and Intel will be sponsoring them again, so look for a good showing.
.: University of Alberta: The champions are returning. Last year, The University of Alberta came in armed with a wealth of knowledge and a cluster from SGI. They’re partnered with deep purple again for SC08. Paul Lu will most certainly have the team well prepared for a fierce competition.
Arguably, the most difficult aspects of this challenge are the architectural constraints. Power [and ultimately cooling] considerations factor into nearly all HPC procurements. We asked the teams how close to the predetermined power ceiling they were prepared to architect their systems.
“We overshot by quite a ways and tried to find ways to reduce usage (the ‘addition by subtraction’ approach). We discovered quite quickly that the power increase represented by adding one more PE was significant enough to make some of our minor performance tunings moot, so we focused on trying to optimize performance relative to CPU and PCI bus speeds, both of which we have access to through the BIOS settings and at the command line,” said MIT team sponsor Kurt Keville.
“Maximizing the usage of the available power is one of our team’s greatest challenges. Our cluster is designed to run at the very edge of the rules without going over the limit, which means on some applications we run very close to 13 amps on each circuit, while at other times we run under by as much as 3 or 4 amps,” said Doug Smith, Team Colorado faculty sponsor.
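To put those numbers in perspective, here is a quick back-of-the-envelope sketch of the power budget the rules impose. The circuit voltage, amp limit and circuit count come from the rules described above; the per-node wattage is a hypothetical figure chosen purely for illustration, not a number from any team.

```python
# Back-of-the-envelope power budget for the Cluster Challenge rules:
# two 120 V circuits, each soft-limited to 13 A (breakers rated 20 A).

VOLTS = 120
AMPS_PER_CIRCUIT = 13      # soft limit enforced via metered PDUs
CIRCUITS = 2

budget_watts = VOLTS * AMPS_PER_CIRCUIT * CIRCUITS
print(budget_watts)        # 3120 W for the entire rack

# Rough sizing: node_watts is a hypothetical per-node draw at full load.
node_watts = 250
print(budget_watts // node_watts)  # ~12 nodes, ignoring switches and overhead
```

Running 3 or 4 amps under the limit on one circuit, as the Colorado team describes, leaves roughly 360–480 W of headroom on that circuit, which is why a single additional processing element can make or break the design.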
Beyond the initial architecture stage, the students are required to integrate and execute a myriad of realistic and synthetic scientific applications. We asked each of the teams which applications have been the most difficult to integrate and operate so far.
“The most difficult one should be OpenFOAM, since it has many different solvers for different problems with different goals. Each solver has its own characteristics,” said NTHUCS team lead Dr. Yeh-Ching Chung.
“In my opinion, GAMESS and OpenFOAM are very tricky to install and to choose the right setup for. GAMESS has so many options that you risk choosing a low-performance configuration, and OpenFOAM has lots of dependencies, so it’s difficult to get all these things working together,” said Jens Domke, senior mathematics major for the Indiana-Dresden team.
I’d like to reiterate that these are undergraduate students performing the work. I don’t want to demean the students in any way; however, their current coursework is not completely based upon their respective areas of study. It’s completely within the realm of possibility that these students go from debugging OpenFOAM compilation errors to writing term papers in political science. This type of exposure can be difficult for some students, but it is a great motivator to broaden one’s horizons beyond textbook learning. Dr. Paul Lu, advisor to the Alberta team, said it best:
Speaking as a coach, the average undergraduate student does not have to deal with as large a variety of software (and, therefore, things that can go wrong) as the experience of the Cluster Challenge. For example, instructors (and systems administrators) go to great lengths to make sure that all of the right libraries and support software are installed in instructional labs. However, some of the Cluster Challenge applications require (or benefit from) specific libraries, drivers, compilers, and compiler flags. This kind of software configuration and tuning complexity is exactly what we tend to shield students from. The Cluster Challenge throws them into the deep end.
Thanks to Brent Gorda for leading the charge in the Cluster Challenge this year. Good luck to all the teams. I hope it’s a wild success!
For more info on the Cluster Challenge during SC08, check out the challenge page here.