US and European HPC Centers to Collaborate for Better Science


Researchers in the USA and in Europe have been offered access to some of the world’s most sophisticated supercomputers as a result of three initiatives announced this month.

The US National Science Foundation (NSF) has opened up the Blue Waters supercomputer to US researchers. The Extreme Science and Engineering Discovery Environment (XSEDE) has launched new open-source tools so that scientists can move their research from campus-level computers to national computational facilities. Finally, XSEDE has joined with Europe’s Partnership for Advanced Computing in Europe (PRACE) to extend their existing collaboration and support research teams spanning the US and Europe.

The NSF has invited US researchers to apply for compute time on the Blue Waters supercomputer, in an effort to provide the computational capability investigators need to tackle much larger and more complex research problems. Applicants must be able to show a compelling science or engineering challenge that requires petascale computing resources. They must also be prepared to demonstrate that the challenge will exploit the computing capabilities offered by Blue Waters effectively.

Blue Waters is a petascale system consisting of 237 racks of Cray XE6 nodes, 32 racks of Cray XK7 nodes with NVIDIA GK110 Kepler GPUs, and over 25 petabytes of usable online storage. The system is located at the US National Center for Supercomputing Applications (NCSA) at the University of Illinois. NCSA and Cray are currently testing the functionality, features, performance, and reliability of the system at full capacity. As part of these tests, a representative production workload of science and engineering applications will run on Blue Waters. In turn, this will improve the ability of the Petascale Computing Resource Allocation (PRAC) team to use the system at full capacity.

By launching its new open-source software tools, XSEDE also aims to help researchers use larger HPC facilities to complete their research. The Basic XSEDE Compatible Cluster Software Stack is a set of tools to allow system administrators to install the current XSEDE cluster software on their local campus or lab cluster.

One of the problems with the academic computing systems currently available in the US is that they are very different from the HPC systems connected to the XSEDE platform. This makes moving data from one system to another challenging.

The new software package aims to address this problem by allowing systems with very different software setups to be operated with a common software solution. A command that works on an XSEDE cluster will work in a similar way, at least for the open-source software components, on a local cluster set up with basic XSEDE-compatible capabilities.
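To make the data-movement challenge concrete, the sketch below shows one way a researcher might move a results directory from a campus cluster to a national resource using the Globus transfer service and its Python SDK. It is purely illustrative rather than part of the XSEDE software stack described here: the endpoint IDs, paths, and access token are hypothetical placeholders, and both machines are assumed to expose Globus endpoints.

    import globus_sdk

    # Hypothetical endpoint IDs for the two machines (in practice, looked up
    # in the Globus web interface).
    CAMPUS_ENDPOINT = "11111111-2222-3333-4444-555555555555"
    NATIONAL_ENDPOINT = "66666666-7777-8888-9999-000000000000"

    # An access token obtained through the usual Globus OAuth2 flow is assumed.
    authorizer = globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")
    tc = globus_sdk.TransferClient(authorizer=authorizer)

    # Describe a one-off transfer of a results directory from the campus
    # cluster to the national HPC resource.
    tdata = globus_sdk.TransferData(
        tc, CAMPUS_ENDPOINT, NATIONAL_ENDPOINT, label="campus-to-national example"
    )
    tdata.add_item("/home/researcher/results/", "/scratch/researcher/results/",
                   recursive=True)

    # Submit the transfer; the service handles retries and returns a task ID
    # that can be used to monitor progress.
    task = tc.submit_transfer(tdata)
    print("Submitted transfer, task id:", task["task_id"])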

This is designed to ease the migration of data from campus or lab clusters to national HPC resources, as Rich Knepper, manager of Campus Bridging and Research Infrastructure at IU and XSEDE’s campus bridging deputy manager, highlighted. “The new software tools are designed to make everything easier: moving data, submitting jobs, sharing and collaborating, and learning and remembering the commands,” he said.

Finally, XSEDE and PRACE have announced that they are exploring options to increase their support for collaborating research teams spanning the US and Europe. They have issued a call for proposals to set up the required interoperability where a clear benefit from the new facilities is expected.

The aim is to give interested research teams the opportunity to express their interest in enhanced interoperability and to propose collaborative support opportunities with both XSEDE and PRACE.

The selected proposals will receive support from both the PRACE-3IP project and XSEDE in order to improve interoperability. Compute time on the PRACE and XSEDE systems for testing the implemented solutions can be requested as part of the proposal.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.