2008 Cluster Challenge Results


Submitted by Brent Gorda. The second Cluster Challenge was held a few weeks ago in association with SC08 in Austin and, in keeping with the cycling theme, has been described as the “Tour de France of SC”.  The peloton consisted of 7 teams from 4 countries, gathered to build and run a supercomputer on benchmarks and applications for 46 straight hours.  The level of activity associated with the event is amazing, with individuals working intensely until they exhaust themselves and slump in place to sleep (and I have the photos to prove it).

The event is intended to expose new talent to our community, encourage HPC as a topic of study in universities, and show that clusters have arrived.

This year’s activity started with a team time trial using the HPC Challenge benchmarks, run on Monday prior to the official start.  The NTHU team from Taiwan posted 703 GFlops on Linpack, edging out Purdue, which obtained 694 GFlops: a difference of just over one percent.  To put this into perspective, that performance would have ranked #4 on the Top500 list only 10 years ago.  It is mind-blowing that a team of undergraduates can build a system that rivals what the best scientists and biggest budgets could achieve a mere 10 years ago.  If you believe (as I do) that the national labs were doing interesting and important work on those systems, you’ll understand why I am so excited about the potential of half a dozen undergraduates and a small cluster.  (Note that we limit the teams to 26 amps at 120 volts, with at least one team achieving over a teraflop after the event without the power restriction.)
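For readers who like to check the arithmetic, here is a minimal sketch (plain Python, using only the figures quoted above) of the power envelope the teams worked within and the margin separating the top two Linpack runs:

```python
# Back-of-the-envelope numbers for the Cluster Challenge time trial.
# The inputs are the figures quoted above; the rest is simple arithmetic.

VOLTS = 120          # wall voltage allowed by the rules
AMPS = 26            # current budget per team
nthu_gflops = 703    # NTHU's Linpack result
purdue_gflops = 694  # Purdue's Linpack result

power_budget_watts = VOLTS * AMPS  # total draw each team may not exceed
margin_pct = (nthu_gflops - purdue_gflops) / purdue_gflops * 100

print(f"Power budget: {power_budget_watts} W")        # 3120 W
print(f"Linpack margin: {margin_pct:.1f}%")           # ~1.3%, i.e. just over one percent
print(f"GFlops per watt (NTHU): {nthu_gflops / power_budget_watts:.2f}")
```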


The real impact of these systems is, of course, what can be done with them.  Thanks to the magic of open source, researchers worldwide have been publishing their efforts for all to use.  Most of these applications are written for Linux-based cluster architectures, which added to the workload for the ASU team.  This year, our teams were asked to work with OpenFOAM (Computational Fluid Dynamics), WPP (Wave Propagation Program), POY4 (Phylogenetic Analysis of DNA and other Data using Dynamic Homology), RAxML (Randomized Axelerated Maximum Likelihood), and GAMESS (General Atomic and Molecular Electronic Structure System).  As the list suggests, these are serious applications.  In fact, they are the same applications (and the same versions) in use by scientists around the world for their day-to-day research today.

Teams are required to run the HPC Challenge benchmarks in Monday's team time trial and then these applications once the real event starts and for the next 46 hours, all within the 26 amp power budget and with the same hardware configuration.  To set up for this activity, teams have to find and collaborate with a vendor who will loan the hardware and help with training.  It is completely up to the teams how to divide up the work and get the applications running on their systems.  This is a fabulous opportunity for the teams to learn about HPC, but it is also a huge effort, and it takes a brave soul and a committed team to venture up that mountain.  Like the famous climb of l’Alpe d’Huez in the Tour de France, it takes genuine and sustained effort behind the scenes to be able to show up on race day and perform.  The team coaches are to be congratulated for their efforts to bring the event together at their institutions.

And perform they did!  With the conference attendees as witnesses, teams were given the data sets for the first time at the opening gala on Monday evening.  They rapidly triaged the workload and then spent until Wednesday afternoon optimizing the throughput of their systems.  As the event progressed, the behind-the-scenes activity was intense as teams dealt with issues such as fully replacing pre-release hardware when the anticipated vendor product announcement did not happen, re-burning low-level PROMs to control core clock rates, and rebuilding a cluster around a different interconnect (with networking equipment scavenged from the show floor).  The resourcefulness of these teams is inspirational and a big part of why the event is so much fun for the conference volunteers and attendees to be involved in.

While I continue to insist that all the teams are winners and each individual a shining star, ultimately only one team can win.  This year the results were close, with the combined team of Indiana University and Technische Universität Dresden (with IBM) inching above the second-place team, NTHU from Taiwan (with HP).  The additional effort on the part of the German and Taiwanese teams is duly noted, and I congratulate both on a successful trip.  Third place went to last year’s winning team, the University of Alberta from Edmonton, Canada.  It is interesting to note that, in keeping with our cycling theme, the field of competitors is truly international.

The remaining teams and vendors also deserve congratulations for their efforts.  They are: Purdue (SiCortex), the University of Colorado (Aspen Systems), Arizona State (Cray/Microsoft), and MIT (Dell/AMD).  Each has a spectacular story to tell, memories and friends to last a lifetime, and hopefully more than a few opportunities to become part of the HPC family.

The Cluster Challenge committee believes strongly in this event, and we work diligently to encourage our communities to take an interest.  We try to set up an environment where conference attendees can interact with the teams, and we hope for synergistic opportunities for both to materialize.  We are excited when we hear of matches and can report that there have been several job offers out of the event.  My personal definition of success for this event is for each team member to have an opportunity to become part of the HPC family.  By working with seedlings, we build a forest that will endure and solve the incredible challenges that lie ahead on the path to exaflops and beyond.

Congratulations to all the teams and individuals who participated in the event!  Thanks to the committee, judges, and supporting vendors.  The success of the challenge is entirely due to your efforts.  Finally, we want to give a big thank you to Pat Teller, General Chair of SC08, and the sponsors of SC08, the ACM and IEEE Computer Society.

Brent Gorda works at Lawrence Livermore National Laboratory as Deputy for Advanced Technology Projects.  He is an avid cyclist and a bit of an old-timer in the HPC community.  Brent is enthusiastic about the growth of HPC that cluster computing represents and about expanding the community with activities such as this.