OCF Deploys 181 Teraflop “Grace” HPC Cluster at University College London

Grace Hopper

Researchers from across University College London are now benefitting from “Grace,” a new 181 Teraflop HPC system named in honor of pioneering computer scientist Grace Hopper. Designed and integrated by OCF in the UK, the Grace cluster integrates Lenovo and DDN technology to provide HPC services alongside UCL’s existing HPC machines, Legion and Emerald.

Researchers across Neuroscience, Engineering, Environmental Sciences and Biological Sciences have joined those from maths and physical sciences in taking advantage of the new system. Grace is already contributing to research ranging from environmental air quality and nanoscale chemistry for electronics to chemical binding for the development of better drugs and algorithms for managing genomes.

“In our research we employ high performance computing facilities to solve the complicated quantum mechanics equations that govern molecular crystals, which play a vital role in understanding the physics and chemistry of other planets,” said Dr. Sam Azadi, Research Associate, Department of Physics and Astronomy, UCL. “The availability of Grace and its thousands of cores is opening up new possibilities for our research – understanding molecular crystals’ behavior under exotic conditions is crucial for modeling the structure, dynamics and evolution of the large planets.”

Housed at VIRTUS, the UK’s first shared data centre for research and education in Slough, Grace will be UCL’s flagship HPC service and is free at the point of use for researchers across the University. Usage tends to be dominated by the maths and physical sciences, in part because applications in those fields are well suited to parallelization. Although it only went live in December 2015, the new system has already attracted more than 50 users, with more expected in the coming months.

“Grace is the result of continued focus on support for world-class computational science and research at UCL,” says Clare Gryce, Director, Research IT Services at UCL. “The system replaces the Iridis3 service previously provided through our founding partnership in the Science and Engineering South consortium. At 181 Teraflops, Grace provides a step change in computational capability to researchers at the University.”

Grace uses Lenovo NeXtScale servers connected to DDN SFA7700 storage, providing 500TB of usable capacity across home and scratch file systems – 10% of which is already in use. The HPC machine will provide core services across UCL faculties, and operates the same software environment as the Legion system, enabling researchers to migrate between systems easily.

“Using a standardized software environment across Grace and Legion means researchers migrating their work between machines can do so easily, without the need to re-engineer applications,” commented Dr. Owain Kenway, Research Computing Analyst at University College London. “Grace is the sister machine to Legion – the two machines service different types of workloads, they’re architecturally different but using a standardized software stack enables us to drive the maximum value and usage from both machines.”

Owain adds: “Grace and Legion will continue to be updated and upgraded in a leapfrog process each year.”

On working with OCF, Owain Kenway comments, “OCF has successful partnerships with world-leading vendors, so it was effective and responsive in bringing together the different partners to address any problems. They did a great job as broker, mediator and integrator, and I really enjoyed working with their technical team.”

Clare Gryce concludes, “When you get down to building and commissioning the machine, it’s about the collaboration between UCL and OCF’s technical team. It’s the quality of the relationship between the teams that is important. OCF has the right people, the expertise and resources to provide services, support and consultancy in addition to the hardware and software solutions that we require.”

The Grace cluster will be dedicated on April 6, 2016.
