LLNL's Hyperion testbed [UPDATED]

There was a lot of talk last week about Hyperion, the technology testbed project led by LLNL along with ten other technology partners (including Intel and Dell). We should probably do a feature article on it for HPCwire, but until then, here are a few nuggets I’ve run across. First, from the Intel web site (no permalink, sorry):

The Hyperion project, a partnership between Lawrence Livermore National Laboratory (LLNL), a world-class research and development facility, Intel and nine other leading technology companies, was highlighted this week at Supercomputing (SC08) in Austin. Hyperion will consist of a world-class Linux cluster to support partner development and large-scale testing. The final Hyperion deployment, which will be Intel Cluster Ready, will provide high performance computing (HPC) assets of more than 1,000 nodes. Intel, LLNL and Dell will collaborate on the installation and deployment. The cluster currently uses the 45-nanometer Intel Xeon processor 5400 series to reach its petaflop computing and storage capabilities. Intel’s next generation microprocessors will figure prominently in the next phase of the project.

Here is what Michael Dell had to say about it during his keynote at SC08 (transcript and audio here):

Now, speaking of amazing, I also want to share some great news for the entire high performance computing community. Dell and Lawrence Livermore National Labs, and nine other vendors, have teamed together to develop Hyperion. This is a 96 teraflop test bed, which is 100 percent dedicated to tackling your biggest challenges.

And hyper-scale computing environments really present some pretty unique challenges. I mentioned earlier this issue of the need for peta-scale software, applications that can take advantage of this enormous power. Storage, connectivity, management software: These are all challenges that we’re going to be dealing with as we implement systems of this scale.

So, Hyperion is a test bed big enough to really test scale, and it will share those breakthroughs with the entire Open Source community.

And, according to information elsewhere on Dell’s web site:

The National Nuclear Security Administration’s Advanced Simulation and Computing Program at LLNL expects Hyperion, created with a consortium of eight additional HPC industry leaders, to speed the development and reduce the cost of powerful HPC clusters vital to U.S. Department of Energy and National Nuclear Security Administration missions, from national and homeland security to energy, climate change, and other global challenges. Hyperion will also enhance U.S. competitiveness in HPC.

And here are a few quotes, from the same link, attributed to Dr. Mark Seager, Head of Advanced Computing Technology at Lawrence Livermore National Laboratory:

  • Hyperion enables the development and scaling of critical Linux cluster technology that will make Linux clusters more affordable and much easier to use.
  • The Hyperion SAN test bed will enable us to attack problems associated with deploying petascale simulation environments.
  • This will allow us to apply more powerful computational resources to the broader set of Department of Energy and National Nuclear Security Administration missions we support, from national and homeland security to climate change and finding new energy sources.

[Update] Well, I probably don’t need to do that article after all. Timothy Prickett Morgan at The Register has a story about Hyperion and its role in the ecosystem. Its goals bear some resemblance to those of the HPC Advisory Council’s cluster, which I discussed here.

To that end, Intel will be supplying a bunch of the current “Harpertown” quad-core Xeons for Dell to plunk into the servers it is providing as part of the procurement. Dell’s Data Center Solution division, which sells custom-made servers for HPC and other hyperscale customers, is actually managing the manufacturing of the servers, and will refresh the Hyperion cluster with future “Nehalem” multicore chips as soon as they are available. The plan, according to Scott, is to put in place a cluster that has at least 100 teraflops of computing power in the initial cluster, and then have ISVs test the hardware and their own software on the machine. For now, the Dell iron will be equipped with Linux, but it is conceivable that some ISVs want to test their apps on Windows – as well as the scalability of Windows HPC Server 2008 – so Microsoft could at some point get involved too.

…Projects like Hyperion are designed to get more apps to scale on big boxes – and more quickly. This is one factor that will drive sales of the latest Intel hardware and the HPC applications that are tuned for it. Without the software tuning, a core is just a core, and hardware sales will slacken or possibly drift to other platforms that have been tuned. And Intel surely cannot afford for that to happen.
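
Morgan’s point about tuning is the crux, and it’s worth a concrete illustration. The scaling data a testbed like Hyperion generates comes from running the same problem at ever-larger rank counts and watching where the speedup curve flattens. Here is a minimal strong-scaling probe in MPI along those lines; to be clear, the workload and problem size are my own illustrative assumptions, not anything from the Hyperion procurement:

```c
/* Minimal MPI strong-scaling probe -- a sketch of the kind of test an
 * ISV might run while shaking out a large cluster. The workload
 * (summing a big array of doubles) and the problem size are
 * illustrative assumptions, not details from the Hyperion project. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Fixed global problem size, split across ranks (strong scaling).
     * Any remainder when nprocs doesn't divide evenly is ignored --
     * fine for a sketch, not for a real benchmark. */
    const long global_n = 100000000L;
    long local_n = global_n / nprocs;

    double *chunk = malloc(local_n * sizeof *chunk);
    for (long i = 0; i < local_n; i++)
        chunk[i] = 1.0 / (double)(rank * local_n + i + 1);

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone together */
    double t0 = MPI_Wtime();

    double local_sum = 0.0;
    for (long i = 0; i < local_n; i++)    /* the "compute" phase */
        local_sum += chunk[i];

    /* Combine partial sums on rank 0 -- the "communication" phase
     * whose relative cost grows as ranks are added. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("%d ranks: sum=%.6f in %.3f s\n", nprocs, global_sum, t1 - t0);

    free(chunk);
    MPI_Finalize();
    return 0;
}
```

Run it at increasing rank counts (mpirun -np 8, then 64, then 512, and so on) and the wall-clock times trace out the scaling curve. On real applications that curve bends long before the hardware runs out, and exposing exactly where and why is what a testbed at this scale is for.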
