PSC’s Bridges Supercomputer Brings HPC to a New Class of Users

Nick Nystrom, director of strategic applications for Pittsburgh Supercomputing Center, at SC15

The democratization of HPC got a major boost last year with the announcement of an NSF award to the Pittsburgh Supercomputing Center. The $9.65 million grant for the development of Bridges, a new supercomputer designed to serve a wide variety of scientists, will open the door to users who have not had access to HPC until now.

Nick Nystrom, PSC director of strategic applications and principal investigator on the project, explains, “Bridges is designed to close three important gaps: bringing HPC to new communities, merging HPC with Big Data, and integrating national cyberinfrastructure with campus resources. To do that, we developed a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges’ anticipated workloads.”

The new supercomputer will bring a whole new level of computational capability to researchers working in a diverse range of fields, including genomics, the social sciences and the humanities. Under the NSF grant, the acquisition of Bridges began in December 2014 with a target production date of January 2016. Compute nodes and the Intel Omni-Path Architecture fabric are being delivered by HPE based on an architecture designed by PSC. The system will feature advanced technologies from both Intel and NVIDIA.

Nystrom emphasizes that one of the major goals of Bridges is to attract non-traditional users of high performance computing (HPC). “Bridges is designed for applying HPC to new kinds of research, in addition to traditional applications,” he explains. “Many physical scientists and engineers have in-depth knowledge of how to work with complex, parallel applications. They can make good use of any supercomputer’s resources, including Bridges.

“The other group of users is all those who are facing the increasing demands of big data and hitting the wall on their laptops or departmental computers,” he continues. “For example, we’re working with digital humanists and philosophers who are increasingly wrestling with large datasets using tools such as Java, R, Python and MATLAB on their desktops. They need to scale those familiar tools up to more memory and more cores, which is beyond the capacity of their workstations.”
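As a rough, hypothetical sketch of what scaling those familiar tools can look like, the Python snippet below counts word frequencies across a document collection using only the standard library; the corpus path, worker count and per-document analysis are illustrative placeholders, and the same script simply gains headroom when moved from a laptop to a many-core, large-memory node.

```python
# Hedged sketch: a familiar Python text analysis that outgrows a laptop.
# The corpus path, worker count, and per-document analysis are placeholders.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def word_frequencies(path: Path) -> Counter:
    """Count word occurrences in one document (stand-in for a real analysis)."""
    return Counter(path.read_text(encoding="utf-8", errors="ignore").lower().split())

if __name__ == "__main__":
    documents = sorted(Path("corpus").glob("*.txt"))  # hypothetical corpus location
    totals = Counter()
    # A laptop might sustain 4 workers; a large-memory node can sustain dozens.
    with ProcessPoolExecutor(max_workers=32) as pool:
        for counts in pool.map(word_frequencies, documents):
            totals.update(counts)
    print(totals.most_common(20))
```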

Bridges will change all this, says Nystrom. Non-traditional users will be able not only to work with very large datasets, but also to develop community datasets that can be shared over the NSF XSEDE network with research facilities around the country. Bridges can also connect to campus resources at universities, providing additional computational capacity when needed along with federated identity management.

Will Bridges Work for Me?

Both traditional and non-traditional users will be interested in Bridges. There are a number of criteria that potential users can apply to decide whether the system fits their work. For example:

  • You want to scale up your research beyond the limits of your laptop while still using familiar software and user environments
  • You want to collaborate with other researchers whose expertise complements your own, leveraging common datasets

Your research can take advantage of any of the following:

  • Rich data collections – Rapid access to data collections will support their use by you, other individuals, collaborations and communities.
  • Cross-domain analytics – Concurrent access to datasets from different sources, along with tools for their integration and fusion, will enable new kinds of questions and open up new avenues of research.
  • Gateways and workflows – Web portals will provide intuitive access to complex applications without requiring supercomputing expertise.
  • Large coherent memory – Bridges’ 3TB and 12TB nodes will be ideal for memory-intensive applications, such as genomics and machine learning.
  • In-memory databases – Bridges’ large-memory nodes will be valuable for in-memory databases, which are increasingly important because of their performance advantages.
  • Graph analytics – Bridges’ hardware-enabled shared memory nodes will execute algorithms for large, nonpartitionable graphs and complex data very efficiently.
  • Optimization and parameter sweeps – Bridges is designed to run large numbers of small to moderate jobs extremely well, making it ideal for large-scale optimization problems and parameter sweeps (see the sketch following this list).
  • Rich software environments – Robust collections of applications and tools, for example in statistics, machine learning and natural language processing, will allow researchers to focus on analysis rather than coding.
  • Data-intensive workflows – Bridges’ filesystems and high bandwidth will provide strong support for applications that are typically I/O bandwidth-bound. One example is an analysis that runs best with steps expressed in different programming models, such as data cleaning and summarization with Hadoop-based tools, followed by graph algorithms that run more efficiently with shared memory.
  • Contemporary applications – Applications written in Java, Python, R, MATLAB and other popular languages will run naturally on Bridges.
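To make the optimization and parameter-sweep use case concrete, here is a minimal, hypothetical Python sketch that evaluates a toy objective over a grid of parameters with independent worker processes; the objective function and grid are invented for illustration, and on Bridges each point could just as easily be submitted as its own small job.

```python
# Hedged sketch: a parameter sweep run as many small, independent evaluations.
# The objective function and parameter grid are illustrative placeholders.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def objective(params):
    """Toy objective: a quadratic bowl standing in for a real simulation or model fit."""
    x, y = params
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

if __name__ == "__main__":
    grid = list(product([i * 0.5 for i in range(-10, 11)],
                        [j * 0.5 for j in range(-10, 11)]))
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(objective, grid))
    best_score, best_params = min(zip(scores, grid))
    print(f"best score {best_score:.3f} at params {best_params}")
```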

User-friendly Features

For both its traditional and non-traditional users, Bridges will provide a wide range of user-friendly HPC features, including extensive support for data analytics.

  • Interactivity – Of all the features, interactivity is the most frequently requested by non-traditional users. Interactivity provides users with on-demand access to the supercomputer and immediate feedback for conducting exploratory data analytics and testing hypotheses. Bridges will offer interactivity through a combination of virtualization for light-weight applications and dedicated nodes for more demanding apps.
  • Gateways – Gateways and the tools for building them will provide easy-to-use access to Bridges HPC and data resources. Users will be able to launch jobs, orchestrate complex workflows and manage data from their web browsers without having to learn how to program supercomputers.
  • Virtualization – The creation of virtual machines (VMs) will enable flexibility, customization, security, reproducibility, ease of use, and interoperability with other services.
  • Databases – Dedicated nodes for databases and web servers will enable sophisticated and efficient data management and modern, distributed application architectures.
  • Hadoop and Spark – Users will be able to leverage extensive software stacks for Hadoop and Spark, for example for Big Data and machine learning applications (a sketch follows this list). Integrating those environments into Bridges also allows more flexible workflows that combine applications written for Hadoop or Spark with other kinds of applications.
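As a hedged illustration of the kind of job those Spark stacks could host, the generic PySpark sketch below tallies word frequencies across a directory of text files; the input path is a placeholder, and nothing here is specific to Bridges.

```python
# Hedged sketch: a generic PySpark job of the kind a Hadoop/Spark stack could run.
# The input path is a hypothetical placeholder; nothing here is Bridges-specific.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("word-count-sketch").getOrCreate()

# Read a directory of text files, split lines into words, and count occurrences.
lines = spark.read.text("data/corpus/")  # hypothetical input location
words = lines.select(F.explode(F.split(F.lower(F.col("value")), r"\s+")).alias("word"))
counts = words.where(F.col("word") != "").groupBy("word").count()
counts.orderBy(F.desc("count")).show(20)

spark.stop()
```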

Says Nystrom, “These innovations will allow researchers to use the system in the way that feels most natural – from domain experts who don’t want to learn parallel programming to traditional HPC users who wish to tailor specific applications to their needs.”

Data-intensive Architecture

The Bridges architecture has been designed to meet the HPC needs of both traditional and non-traditional users across a broad spectrum of domains.

At the heart of the system are three types of HPE servers integrated into a high-performance compute cluster. They include:

  • HPE Apollo 2000 servers (some with integrated GPUs), providing scalable performance for interactivity and capacity computing.
  • HPE ProLiant DL580 systems, enabling memory-intensive applications, virtualization, and interactivity, including large-scale visualization.
  • HPE Integrity Superdome X mission-critical shared-memory systems, providing maximum internal bandwidth and capacity for the most memory-intensive applications.
  • Bridges will also include the latest Intel Xeon CPUs, and NVIDIA Tesla dual-GPU accelerators will boost performance for a variety of applications.

The architecture includes three tiers of large, coherent shared-memory nodes: 12TB nodes for genomics, machine learning and other extreme-memory applications; 3TB nodes, numbering in the tens, for virtualization, interactivity (including large-scale visualization and analytics) and mid-spectrum memory-intensive jobs; and 128GB nodes, numbering in the hundreds, for executing most workflows, Hadoop and capacity computing.

The Intel Omni-Path Architecture fabric will provide Bridges with an extremely high-bandwidth internal network optimized for MPI and other communications. The fabric will connect all nodes and the shared file system. Among its advanced capabilities are a 100 Gbps line speed per port, 25 GB/s of bidirectional bandwidth per port, and the ability to handle 160 million messages per second.
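For a sense of the point-to-point traffic such a fabric is built for, the hedged mpi4py sketch below times a simple two-rank ping-pong; the message size and repetition count are arbitrary illustrative values, and this is generic MPI code rather than a Bridges benchmark.

```python
# Hedged sketch: a two-rank MPI ping-pong timing loop.
# Run with, e.g.: mpirun -n 2 python pingpong.py
# Message size and repetition count are arbitrary illustrative values.
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(1 << 20, dtype=np.uint8)  # 1 MiB message buffer
reps = 100

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each repetition moves the buffer once in each direction.
    gbytes_per_s = 2 * reps * buf.nbytes / elapsed / 1e9
    print(f"approximate point-to-point throughput: {gbytes_per_s:.2f} GB/s")
```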

Bridges Target Schedule

  • Construction of Bridges is beginning in late 2015, including the start of the early user period
  • Phase 1 to be completed in early 2016
  • Phase 2 Technical Update scheduled for summer 2016

Bridge to the Future

In the world of traditional supercomputing, grand challenges in fields such as physics, fluid dynamics, cosmology, and climatology demand big systems with high arithmetic speed. PSC’s Bridges not only takes these big batch jobs into account; it also allows non-traditional users to move beyond the computational constraints of their workstations. Non-traditional users can benefit from HPC using familiar applications and high-productivity programming languages. They don’t have to become experts in parallel programming, the domain of traditional supercomputer users. Bridges makes the democratization of supercomputing a reality.
