ARCHER is now the UK’s Most Powerful Supercomputer

A new-generation supercomputer, capable of more than one million billion calculations a second, has been officially inaugurated at the University of Edinburgh.

The £43 million ARCHER (Advanced Research Computing High End Resource) system will provide high-performance computing for research and industry in the UK. ARCHER is the most powerful HPC system in the UK and ranked 19th in the most recent TOP500 – the list of the 500 highest-performing supercomputers, published in November 2013.

The system will help researchers carry out complex calculations in areas such as simulating the Earth’s climate, calculating the airflow around aircraft, and designing novel materials.

The system, housed at the University’s Advanced Computing Facility at Easter Bush, has up to three and a half times the speed of the HECToR system it replaces. ARCHER’s twin rows of black cabinets are supported by the newly installed UK Research Data Facility, bringing together the UK’s most powerful computer with one of its largest data centres. This creates a facility to support Big Data applications – an area the UK Government has identified as one of its Eight Great Technologies.

The building housing the ARCHER system is among the greenest computer centres in the world, with cooling costs of only eight pence for every pound spent on power.

ARCHER is funded and owned by the Engineering and Physical Sciences Research Council (EPSRC). The machine is built on Cray’s XC30 hardware; its Intel Xeon E5-2600 v2 processors provide performance and scalability while maximising energy efficiency.

ARCHER consists of 3008 compute nodes, each featuring two 12-core Intel Xeon E5-2697 v2 (2.7 GHz) Ivy Bridge processors. There are two configurations of compute node within the system: standard nodes have 64 GB of memory shared between the two processors, while a small number of high-memory nodes have 128 GB.
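Those figures make it easy to sanity-check the headline claim of more than one million billion calculations a second. Here is a rough sketch of the arithmetic, assuming the usual 8 double-precision flops per cycle per core for Ivy Bridge’s AVX units (a figure not stated in the article):

    /* Back-of-envelope check of ARCHER's headline figures, using the
     * node counts quoted above. The 8 flops/cycle/core figure is an
     * assumption based on Ivy Bridge's AVX units (one 4-wide add plus
     * one 4-wide multiply per cycle in double precision). */
    #include <stdio.h>

    int main(void)
    {
        const double nodes           = 3008;
        const double cores_per_node  = 2 * 12;  /* two 12-core CPUs */
        const double clock_ghz       = 2.7;
        const double flops_per_cycle = 8;       /* assumed: AVX add + mul, FP64 */

        double peak_pflops = nodes * cores_per_node * clock_ghz
                             * flops_per_cycle / 1e6;
        double memory_tb   = nodes * 64.0 / 1024.0;  /* standard nodes only */

        printf("Peak performance: %.2f PFlop/s\n", peak_pflops);  /* ~1.56 */
        printf("Aggregate memory: ~%.0f TB\n", memory_tb);        /* ~188  */
        return 0;
    }

The result, roughly 1.56 PFlop/s, sits comfortably above the one-petaflop mark quoted above.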

The memory is arranged in a non-uniform memory access (NUMA) layout: each 12-core processor forms a single NUMA region with 32 GB of local memory (64 GB on the high-memory nodes). Accessing memory within a NUMA region has lower latency than accessing memory in another NUMA region.
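This layout matters for application performance. A minimal sketch of the standard first-touch technique on Linux follows; it is not ARCHER-specific code, and the array size and OpenMP scheduling are illustrative assumptions:

    /* First-touch NUMA placement sketch: on Linux, a memory page is
     * physically allocated on the NUMA region of the core that first
     * writes to it. Initialising the array with the same thread layout
     * used by the compute loop keeps each thread's data in its local
     * NUMA region. Compile with e.g. "cc -fopenmp". */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 26)   /* 64M doubles, ~512 MB (illustrative size) */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        if (!a) return 1;

        /* First touch: each thread initialises the pages it will later
         * use, so they land in that thread's local NUMA region. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = 1.0;

        /* Compute loop with the same static schedule: accesses stay local. */
        double sum = 0.0;
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.1f\n", sum);
        free(a);
        return 0;
    }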

The Cray XC30 compute nodes are connected using the Aries interconnect in a Dragonfly topology. Four compute nodes connect to each Aries router; 188 nodes are grouped into a cabinet; and two cabinets make up a group. ARCHER has 84 optical links per group, enabling a peak bisection bandwidth of 7,200 GB/s across the whole system. The MPI latency on Aries is approximately 1.3 μs, with an additional 100 ns of latency when communicating over the optical links.
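Latency figures like these are typically measured with a ping-pong microbenchmark between two MPI ranks. A minimal sketch is shown below; the repetition count is an arbitrary assumption, and production benchmarks such as the OSU micro-benchmark suite add warm-up rounds and message-size sweeps:

    /* Minimal MPI ping-pong: the classic way to measure point-to-point
     * latency figures like the ~1.3 us quoted above.
     * Run with two ranks, e.g. "mpirun -n 2 ./pingpong". */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int reps = 10000;  /* assumed repetition count */
        char byte = 0;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) { MPI_Finalize(); return 1; }

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)  /* one-way latency is half the round-trip time */
            printf("latency: %.2f us\n", (t1 - t0) / reps / 2 * 1e6);

        MPI_Finalize();
        return 0;
    }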

Professor David Delpy, CEO of the Engineering and Physical Sciences Research Council, said: “EPSRC is proud to unveil this new ARCHER service. It will enable researchers in engineering and the physical sciences to continue to be at the forefront of computational science developments and make significant contributions in the use of Big Data to improve understanding across many fields and develop solutions to global challenges.”

This story appears here as part of a cross-publishing agreement with Scientific Computing World.