Happy Birthday Ranger


The folks at the Texas Advanced Computing Center (TACC) are celebrating a birthday today.  Their massive cluster system, Ranger, just turned 2!  The 579.4 TF machine was the first system in the NSF “Path to Petascale” program, and it remains in the top ten of the Top500 list.

“We’re proud of the fact that Ranger has been so widely requested and used for diverse science projects,” said Jay Boisseau, principal investigator of the Ranger project and director of TACC. “It supports hundreds of projects and more than a thousand users — and you don’t attract that many projects and researchers unless you’re running a great, high-impact system. Ranger is in constant demand, often far in excess of what we can provide.”

Ranger consists of 15,744 quad-core AMD Opteron processors housed in a Sun Constellation cluster.  Since entering production on Feb. 4, 2008, Ranger has enabled 2,863 users across 981 unique research projects to run a total of 1,089,075 jobs, consuming 754,873,713.8 processor hours to date.
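As a quick back-of-envelope check on that 579.4 TF peak figure, the core count lines up if you assume 2.3 GHz quad-core Opterons doing 4 floating-point operations per clock per core (those two numbers are our assumptions, not figures from TACC's announcement):

```c
#include <stdio.h>

int main(void) {
    /* Sanity check of Ranger's quoted 579.4 TF peak.
       Clock speed and FLOPs/cycle are assumptions on our part
       (2.3 GHz quad-core Opterons, 4 FLOPs per cycle per core). */
    const long processors      = 15744;  /* quad-core sockets, per the post */
    const long cores_per_proc  = 4;
    const double ghz           = 2.3;    /* assumed clock */
    const double flops_per_clk = 4.0;    /* assumed per core */

    long cores     = processors * cores_per_proc;           /* 62,976 cores */
    double peak_tf = cores * ghz * flops_per_clk / 1000.0;  /* GFLOPS -> TF */

    printf("cores: %ld  peak: %.1f TF\n", cores, peak_tf);  /* ~579.4 TF */
    return 0;
}
```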

“Ranger enabled the open science and engineering communities to address challenging problems in areas such as astrophysics, climate and weather, and earth mantle convection at unprecedented scales,” Muñoz said. “Ranger truly was a vanguard in NSF’s ‘Path to Petascale’ program and is a testament to what can be done when ‘thinking out of the box.’”

Ranger broke new ground in many aspects of building and integrating large-scale, commodity HPC systems.  As a result, lessons learned in its operation have made their way back into many projects.  InfiniBand technologies, Lustre, OpenMPI, MVAPICH, and Sun Grid Engine have all reaped the benefits of Ranger’s scale.
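To put those software names in context: OpenMPI and MVAPICH are the MPI stacks user codes link against, and Sun Grid Engine is the batch system that launches them. A minimal MPI program along these lines (a generic sketch, not anything Ranger-specific) is the sort of thing that had to compile with mpicc and start cleanly across tens of thousands of cores:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank in the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    MPI_Get_processor_name(host, &len);    /* node this rank landed on */

    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched through the stack’s mpirun or mpiexec under a batch job, a program like this exercises exactly the launch and interconnect paths that Ranger’s scale stress-tested.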

“These software packages have been significantly improved because of the Ranger project, and are now downloaded by people building clusters in other places. Thus, the Ranger project has had a huge impact on other clusters and the science done on those clusters around the world,” Boisseau said.

Congrats to Ranger and the folks at TACC for advancing the deployment of such super-scale machines.  For more info, read the original article here.