PSC Retires Blacklight Supercomputer to Make Way for Bridges

The big-memory “Blacklight” system at the Pittsburgh Supercomputing Center will be retired on August 15. Blacklight will be replaced by the new “Bridges” supercomputer.

Funded by a $2.8M award from the National Science Foundation, Blacklight is an SGI UV 1000 cc-NUMA shared-memory system that enabled users to run shared-memory jobs of up to 16 terabytes. In its five years of service, Blacklight’s vast data-sorting capability helped track early human migration, organize the genomic information of wheat, detect irregular stock market trades, and study national systems for live organ transplants.

Building on the lessons learned from Blacklight, the Pittsburgh Supercomputing Center received a $9.65M NSF award in 2014 to create Bridges, a uniquely capable supercomputer designed to empower new research communities, bring desktop convenience to supercomputing, expand campus access, and help researchers facing challenges in Big Data to work more intuitively.

Built by HP, Bridges will feature multiple nodes with as much as 12 terabytes each of shared memory, equivalent to unifying the RAM in 1,536 high-end notebook computers. This will enable it to handle the largest memory-intensive problems in important research areas such as genome sequence assembly, machine learning and cybersecurity.
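
To illustrate why that much coherent shared memory matters, here is a minimal OpenMP sketch in C (a generic illustration, not PSC code): every thread works on a single shared in-memory array, the pattern that memory-intensive applications such as genome assemblers rely on. The 1 GiB array size is an arbitrary placeholder so the example runs on an ordinary machine.

    /* Minimal shared-memory sketch in C with OpenMP. All threads read and
       write one array allocated in a single address space; on a big-memory
       node the same pattern scales to terabyte-sized data structures. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t n = (size_t)1 << 27;            /* 128M doubles = 1 GiB */
        double *data = malloc(n * sizeof *data);
        if (!data) { perror("malloc"); return 1; }

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; i++) {
            data[i] = (double)i;               /* all threads share one array */
            sum += data[i];
        }

        printf("up to %d threads summed one shared array: %g\n",
               omp_get_max_threads(), sum);
        free(data);
        return 0;
    }

Compiled with gcc -fopenmp, the same source runs unchanged whether the node holds gigabytes or terabytes of RAM; only the problem size changes.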

In this video, Nick Nystrom from the Pittsburgh Supercomputing Center presents an overview of the Bridges supercomputer.

“First and foremost, Bridges is about enabling researchers who’ve outgrown their own computers and campus computing clusters to graduate to supercomputing with a minimum of additional effort,” says Ralph Roskies, PSC scientific director and professor of physics, University of Pittsburgh. “We expect it to empower researchers to focus on their science more than the computing.”

Bridges’ capabilities stem from a number of technological innovations, developed at PSC and elsewhere, that will see some of their first applications in the Bridges system:

  • Hardware and software “building blocks” developed at PSC through its Data Exacell pilot project, funded by NSF’s Data Infrastructure Building Blocks (DIBBs) program, will enable convenient, high-performance data movement between Bridges and users, campuses, and instruments.
  • Bridges will be composed of four types of HP servers integrated into a high-performance compute cluster:
    • HP Apollo 6000 Servers (some with integrated GPGPUs), providing scalable performance for interactivity and capacity computing.
    • HP ProLiant DL580 Systems, which will enable memory-intensive applications, virtualization, and interactivity, including large-scale visualization.
    • HP DragonHawk mission-critical shared-memory systems, providing maximum internal bandwidth and capacity for the most memory-intensive applications.
    • HP Storage Servers, supporting data movement for the PSC Data Exacell.

“While the research demands for high performance computing resources are growing, they are also expanding to a mix of compute-centric, data-centric, and interaction-centric workloads,” said Scott Misage, general manager, High Performance Computing, HP. “HP has the HPC leadership and experience, breadth of portfolio, services and support to partner with PSC in delivering the high performance computing solution that will empower its varied research communities to achieve new scientific breakthroughs.”

The Intel Omni-Path Architecture fabric will give Bridges the highest-bandwidth internal network, deliver valuable optimizations for MPI and other communications, and provide NSF users with early access to this important new technology for Intel Xeon-based servers. “The Intel Omni-Path Architecture will help PSC and the Bridges system provide a new level of performance and flexibility for Xeon-based solutions,” says Barry Davis, Fabrics GM at Intel’s Technical Computing Group.
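
As a rough sketch of the kind of MPI communication such a fabric accelerates, the minimal C program below performs a global reduction across ranks, a latency-sensitive collective operation. It is a generic illustration, not Bridges-specific code.

    /* Minimal MPI sketch: every rank contributes one value and all ranks
       receive the global sum. Collectives like this are exactly the traffic
       a high-bandwidth, low-latency fabric is built to speed up. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = rank, total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks, sum of rank IDs = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with, for example, mpirun -np 4, four ranks print a sum of 6; the same call pattern scales to thousands of ranks on a production fabric.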

Next-generation Nvidia Tesla GPUs will accelerate a wide range of research through a variety of existing accelerated applications, drop-in libraries, easy-to-use OpenACC directives, and the CUDA parallel programming platform.
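
For context, an OpenACC directive lets a compiler offload an ordinary loop to a GPU without rewriting it in CUDA. The C sketch below is a generic SAXPY-style illustration, not an application from the article:

    /* Minimal OpenACC sketch: the pragma asks the compiler to run this
       loop on an attached GPU, copying x in and y both ways. Without an
       accelerator, the code runs unchanged on the CPU. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %.1f\n", y[0]);   /* expect 4.0 */
        return 0;
    }

With an OpenACC-capable compiler (for example, nvc -acc), the loop executes on the GPU; a plain C compiler simply ignores the pragma and runs it serially.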

PSC will hold a launch event for Bridges in January 2016.
