Is Europe Leading the Energy Efficiency Race?

Europe is notorious for its high energy costs. In the first of three articles on energy efficiency in high-performance computing, Tom Wilkie from Scientific Computing World asks if that’s why so much of the initiative appears to be coming from Europe.

Tom Wilkie, Scientific Computing World

On Monday of this week, PRACE, the European partnership for advanced computing, announced that it had chosen the industrial partners who will start work in May on developing more energy-efficient supercomputers. Separately, the European Research Council (ERC) has awarded a prestigious Starting Grant, under the EU’s Horizon 2020 program, to a researcher at the Barcelona Supercomputing Centre to tailor computer workloads so as to reduce energy consumption.

At the High-Performance Computing and Big Data Congress, held in London on 3 February, two commercial companies, Verne Global and Hydro66, were assiduously offering data-centre services to support the requirements of HPC workloads. Significantly, both are based in Europe, in Iceland and northern Sweden respectively, and offer services that draw on renewable energy – a mixture of geothermal and hydroelectric in the case of Iceland and hydroelectric power for Sweden.

The further attraction of the northern latitudes is that they have low ambient temperatures that will lower the cost of cooling. Both countries have high-speed data connections. Sweden in particular has a well-developed fibre infrastructure across much of the country and ranks second in Europe for FTTH/B broadband deployments, according to the FTTH Council Europe.

In enterprise computing too, the focus has turned to Europe. Last month, Apple announced that it would build two data centers in Europe, costing a total of €1.7 billion and each powered entirely by renewable energy. All these initiatives come hard on the heels of another European announcement last month: Lenovo and the UK’s Hartree Centre are to collaborate on a joint research project, using ARM processors and software advances to improve the energy efficiency of high-performance computing.

On the other side of the world, the cost savings from energy efficient computing were highlighted in an announcement at the end of February from Green Revolution Cooling that Australia’s leading seismic exploration services firm, DownUnder GeoSolutions, had reduced its cooling energy needs by 90 per cent as a result of deploying GRC’s oil immersion server cooling system.

GRC partnered with SGI to deliver energy-efficient HPC clusters providing 8 PFlops of computing and one-tenth of the normal cooling requirement. Separately, CoolIT Systems, which specialises in Direct Contact Liquid Cooling technology, announced that the Demand Liquid Alliance (DLA) industry group, which it set up three years ago, is to amalgamate with The Green Grid Association, the leading international group promoting resource-efficient information technology and data centers.

The most ambitious effort among these recent initiatives is perhaps the ‘pre-commercial procurement’ (PCP) announced yesterday by PRACE. The project is to obtain R&D services that should result in future PRACE HPC systems becoming more energy efficient. This is the second phase of the programme. The awards are highly competitive and, during Phase I, the European computer-making companies Bull, E4, Maxeler, and Megware explored various possible solutions. All of them have now been invited to bid for Phase II, during which prototypes for the three most promising solutions will be built. This phase is expected to start in May 2015.

This pre-commercial procurement is the first of its kind in the field of HPC in Europe. It is a multi-country, multi-partner joint effort, implemented by a consortium composed of several PRACE partners: CINECA (Italy) as the procuring entity, together with CSC (Finland), EPCC (UK), JSC (Germany) and GENCI (France).

In August 2012, Mark Parsons (EPCC) and Dirk Pleiter (FZ-Jülich) set out the ambitious goals for the program at a PRACE workshop in Brussels. PRACE wanted nothing less than a ‘whole system design’ for energy-efficient HPC, and they set out three key components to this design.

The first component was the creation of energy efficient computer systems, where they expected manufacturers to draw not only on existing European expertise in HPC but also on the expertise of the embedded computing sector. The second part of the PRACE program, they continued, was ‘extreme cooling efficiency’ where they looked to suppliers to develop next-generation cooling technologies, going beyond direct liquid cooling and expanding the range of components that benefited from cooling. The third leg was ‘systemware efficiency’ – the development of new operating systems and advances in system software that would support energy efficiency, scalability, and resiliency.

On the cooling side, the aim is to enable free cooling with maximum heat reuse. They gave the example of a cooling subsystem in which at least 90 per cent of the heat is removed by liquid and the system can be operated at a liquid outlet temperature of at least 45°C.

Even though this inaugural workshop took place nearly three years ago, Parsons and Pleiter had an eye to what is now being termed, by some, data-centric computing. They specified that new systems developed under the PRACE initiative should have an architecture that integrates a large number of manycore (perhaps heterogeneous) devices but emphasized that the computing devices must be tightly coupled to network and storage devices: ‘More so than today,’ they said.

The system building blocks should be configurable for compute or data intensive work and consideration should be given to the system software stack for data access and management — for example, integrating storage class memory devices into a parallel file system. ‘We want a new type of scalable modular system that demonstrates a balance of numerical and data intensive computing,’ they concluded.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.