HPE Powers Research at DiRAC HPC Facility

In this special guest feature, Bill Mannel from HPE writes that DiRAC and the University of Leicester partnered with HPE to implement next-generation HPC solutions to advance scientific research in astrophysics, particle physics, cosmology, and nuclear physics.

With a mission to support simulation and modeling across all branches of theoretical astrophysics, particle physics, cosmology, and nuclear physics in the UK, the DiRAC high-performance computing (HPC) facility has gained prominence as a world-class provider of HPC services.

DiRAC provides multiple services tailored to particular science workflows: the Data Intensive service is hosted by the Universities of Leicester and Cambridge, the Extreme Scaling service by the University of Edinburgh, and the Memory Intensive service by Durham University.

“We’re simulating systems on all scales, from the smallest subatomic particles to the largest clusters of galaxies and everything in between,” said Dr. Mark Wilkinson, Associate Professor of Theoretical Astrophysics at Leicester and director of the DiRAC facility. “DiRAC researchers study many complex problems such as the formation and evolution of planetary systems, and the structure of matter itself.”

Cutting-edge science requires cutting-edge equipment

DiRAC needed high-performance computing solutions that could efficiently perform the complex calculations required for its simulations of gravitational waves, planet formation, and other phenomena, as well as process and store the petabytes of data generated by its users. A new generation of supercomputer was required.

In this video, Mark Wilkinson describes how DiRAC and HPE are elevating theoretical physics and astrophysics.

But the DiRAC community also knew that cutting-edge computers wouldn’t be enough. It also needed an IT partner that could help design a system that met its unique research requirements.

At HPE, we’ve seen many organizations struggle with technology that couldn’t handle enormous volumes of data. As a national facility serving a large research community, DiRAC needed HPC solutions that could manage complex workloads without the risk of downtime. It also needed more computing power, better storage capabilities, and faster processing to allow its researchers to compete internationally.

Launching Apollo

To support its research efforts and better manage its data-intensive calculations, DiRAC installed the HPE Apollo 6000 Gen10 system with Intel® Skylake chips at the University of Leicester, complemented by a 6TB HPE Superdome Flex server to accommodate large, in-memory calculations. At the University of Edinburgh, the DiRAC Extreme Scaling service deployed an HPE SGI 8600 system, taking advantage of its hypercube topology to enhance complex particle physics calculations. The Edinburgh system was further accelerated by incorporating NVIDIA GPUs.

The Apollo 6000 system supports cutting-edge high-bandwidth, low-latency interconnect technologies that provide agility and computational balance for some of DiRAC’s most demanding applications. The HPC environment also provides enhanced storage and improved processing capabilities, which make it possible to complete scientific projects and generate research insights faster, reducing “time to science” for users and giving DiRAC researchers an edge over their competitors.

The HPC technical support team at the University of Leicester worked closely with HPC Services experts at HPE to design, deploy, manage, and support its Apollo 6000-based system, which delivers supercomputing for a wide range of complex workloads. The HPE team helped the University install the system and later upgrade and expand it from 4,000 to about 14,000 cores.

According to Dr. Wilkinson, the process of designing the system started by determining what the scientists who would be using it needed it to do.

“We get the researchers to decide exactly what it is they are going to do in the next three to five years,” he said, “and we translate that into hardware requirements at a high level and then work with the industry partners to translate that into an actual technical solution.”

Achieving liftoff

Since deploying the Apollo 6000, DiRAC researchers have been able to work on projects that would have otherwise been beyond their capabilities.

“We are carrying out the first billion-particle calculations of star formation—looking at how a cloud of gas turns from gas into stars,” Dr. Wilkinson said. “That’s something we couldn’t have done on the old system because it just wasn’t powerful enough.”

Dr. Wilkinson believes that DiRAC researchers who use HPE’s supercomputer solutions will be able to more quickly realize the benefits of the software they develop for their scientific projects.

Catalyzing innovation

The University of Leicester is also one of the academic partners in Catalyst UK, a three-year program bringing Arm-based HPC clusters to three leading UK universities. The program was announced last year by HPE as a collaboration with Arm, SUSE, Marvell, and Mellanox, and is also supported by the DiRAC facility.

Designed to pave the way for accelerated supercomputer adoption, the program is dedicated to supporting research into future architectures and software as well as developing an open Arm software ecosystem via cooperation with academia and the commercial sector.

For this particular initiative, the University of Leicester implemented an HPE Apollo 70 system. DiRAC users have been impressed with the system and were able to perform complex simulations of planetary collisions on it essentially out of the box.

One small step, one giant leap

This kind of investment in increasing HPC adoption may provide an economic boon. Hyperion Research paints a bullish picture for HPC solutions. According to its estimates, every dollar invested in HPC technology could return $551 in revenue and $52 in profit to private-sector firms. The UK government is supporting the Catalyst program in the hopes of creating an environment where UK companies can use HPC solutions to develop innovative and internationally competitive products, which will create higher-paying jobs.
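The cited Hyperion multipliers can be scaled to any investment size. A minimal sketch of that arithmetic, assuming the per-dollar figures quoted above ($551 revenue, $52 profit per dollar invested) and treating them strictly as projections:

```python
# Hypothetical illustration of the Hyperion Research estimates cited above:
# each dollar invested in HPC technology is projected to return
# $551 in revenue and $52 in profit. These are estimates, not guarantees.

REVENUE_PER_DOLLAR = 551
PROFIT_PER_DOLLAR = 52

def projected_returns(investment_usd: float) -> dict:
    """Scale the per-dollar estimates to a given investment size."""
    return {
        "revenue": investment_usd * REVENUE_PER_DOLLAR,
        "profit": investment_usd * PROFIT_PER_DOLLAR,
    }

# Example: a $1M HPC investment under these assumptions
print(projected_returns(1_000_000))
```

On these figures, even a modest HPC budget would imply returns orders of magnitude larger than the outlay, which is the basis of the UK government's interest in the Catalyst program.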

Myriad industries can make good use of high-performance supercomputers; organizations can use the technology to analyze molecular data from cancer patients, design new cars or airplanes, detect fraud in monetary transactions, and run climate models, to name but a few possibilities.

The new DiRAC services are up and running, and companies that are thinking about how they’d like to enhance their products or create new inventions via supercomputer technology are invited to contact DiRAC to discuss opportunities to access the facilities. We encourage you to take part in this exciting endeavor.


Bill Mannel, VP & GM – HPC and AI at HPE

Bill Mannel is Vice President and General Manager of High-Performance Computing (HPC) and Artificial Intelligence (AI), for Hybrid IT, Hewlett Packard Enterprise. Bill joined HPE in 2014 and is a seasoned veteran of the server and high-performance computing industry. During Bill’s first three years at HPE, the HPC business grew significantly faster than the overall market. HPE acquired Silicon Graphics International Corp. (SGI), a pure-play HPC company, closing the integration in November 2017.
