Tom Wilkie from Scientific Computing World writes that the mood at PRACEdays15 in Dublin was rightly positive, but there may be a paradox at the heart of European HPC policy.
The strengths and the limitations of Europe’s supercomputing strategy were laid out at the PRACEdays15 conference in Dublin at the end of May. The minds of many delegates were concentrated by the announcement in the USA, over the past few months, of the $425 million ‘CORAL’ procurement, intended to develop supercomputers that will leapfrog the international competition and open the way to an Exascale machine.
It became clear from Eric Chaput’s presentation to a satellite meeting reviewing European Exascale projects that Airbus, the Europe-based international aerospace company, has an almost insatiable appetite for high-performance computing. The ultimate goal of Airbus is to simulate an entire aircraft on computer, according to Chaput, who is senior manager of flight-physics methods and tools at the company. This prompted the observation from one audience member that Exascale would not be enough for Airbus; it would need Zettascale or even more powerful machines.
The requirement for ever more powerful computing resources in Europe was powerfully reiterated by Sylvie Joussaume in the course of the panel discussion that concluded the conference. This time, the emphasis was on research for the public good rather than commercial benefit. Dr Joussaume, chair of PRACE’s Scientific Steering Committee in 2015, is a senior researcher within the French CNRS and an expert in climate modelling. She stressed that European climate researchers needed access to the next generation of the most powerful machines if they were to maintain their expertise in the subject.
In the USA, the Department of Energy has announced that three next-generation systems are to be built as part of the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) by two consortia, headed by IBM and Intel respectively. The US Department of Energy can act as a single national purchaser of computer systems, and the US Government has a history of using procurement contracts successfully to subsidise research and development by private industry. Europe, by contrast, is a much looser collaboration of individual nation states, and the European Commission is not permitted a budget that would allow it to place heavily subsidised contracts with a European supercomputer vendor in the way that the US Department of Energy appears free to do for US high-tech companies.
Within the limits of what is permitted, nonetheless, the European Commission (EC) presented a cogent programme of work on high-performance computing and on exploring next-generation Exascale technologies. The head of the e-infrastructure unit at the Commission, Augusto Burgueño Arjona, told the meeting that a high-performance computing strategy was an essential building block in meeting the aims of Europe’s Digital Single Market. There was a need to develop an infrastructure for innovation combining cloud, HPC, and big data and, he assured his audience, PRACE was seen as a fundamental part of that.
The European Commission’s policy will face its first significant test later this summer. Technically, PRACE, the Partnership for Advanced Computing in Europe, is coming to the end of its first phase. It has allowed researchers access to some of the most advanced computing resources in the world, even if they happen to live and work in countries that have only very limited national computing facilities. Currently, four hosting countries – Spain, France, Germany, and Italy – grant researchers from other European member countries run time on their premier national facilities.
PRACE was funded in 2010 for a five-year period, with Germany the first hosting nation to provide access to other European users, followed by France. Their provision of compute cycles under phase one of PRACE is therefore coming to an end. Owing to complications arising from the economic crash, Spain and Italy took slightly longer to join, so they will be providing compute cycles for longer – possibly through to 2017.
Although everyone agrees that PRACE has been a scientific success, some of the hosting nations feel that the current funding arrangements have been a little unfair. For the next phase, it was expected that the European Commission would provide funds to reimburse the hosting nations for at least some of the operational expenditure for PRACE jobs running on the national machines. But it appears that this may not happen, so reimbursement may have to be arranged at the level of the PRACE membership rather than by the Commission. Funding the next phase of the project will therefore be tricky, especially if the arrangements are to be transparent to all.
A European approach to buying supercomputers?
Moreover, while the EC leverages what member states contribute in a federated approach, there was a growing recognition, Burgueño suggested, that with the changes taking place in science and industry there are now needs for sorts of infrastructure that did not exist before. In this context, it was also realised that the individual member-state approach might not be enough to create a European approach that was both effective and economical.
The Commission would be monitoring the HPC market and R&D landscape in Europe in the course of this year and would report to the Council of Ministers and the European Parliament by the end of 2015 on the steps that should be taken after that. In addition, the European Strategy Forum on Research Infrastructures (ESFRI) was being invited to look at the issue and to propose ways of better coordinating the investments being made by individual member states. ESFRI was set up in 2002 to develop the scientific integration of Europe and to strengthen its international reach, by supporting a coherent, strategy-led approach to policy-making on research infrastructures in Europe and by facilitating multilateral initiatives for their better use and development. ‘We put high hopes on ESFRI to do a good job,’ Burgueño said.
The infrastructure to support research and innovation in Europe needed world-class computing capability, Burgueño assured his audience. While the European Commission is responsible for disbursing public money and therefore needed to be careful not to be seen to be favouring one commercial company over another (something that does not appear to trouble the USA in its procurement of the CORAL systems), Burgueño did point out that some 700 million euros would be available through the EU’s Horizon 2020 research programme for public-private partnerships with the commercially led European Technology Platform for HPC (ETP4HPC).
The Commission also had a PPI programme – Public Procurement for Innovation – intended to bring together the various bodies responsible for procurement in the individual member states and to encourage them to work together. The funding is modest – around 20 million euros – but he expressed the hope that it would encourage the member states to come together to carry out pre-Exascale procurement. The Commission is also developing the idea of a demonstrator project to integrate the various technologies developed so far into an extreme-scale forerunner machine, on a timescale of 2018 to 2020.
Paradox in Europe’s policy?
Nonetheless, there appears to be a paradox hovering in the background of the EC’s policy and strategy. The Commission is clear that European researchers need a world-class computing infrastructure – and that this will mean access to Exascale machines when they become available. Such machines will be needed by academic scientists if European science is to be of international standing, but they will also be needed by industrial scientists and engineers to foster innovation if European industry is to remain internationally competitive. The case of Airbus, cited at the beginning of this article, demonstrates this neatly.
The Commission is spending public money on projects to test and demonstrate technologies that might form the basis for Exascale computers. Such projects include Mont-Blanc, at the Barcelona Supercomputing Centre in Spain, and DEEP, which is coordinated by the Jülich Supercomputing Centre in Germany. Through PRACE and other mechanisms, it is promoting the use of advanced computers and training engineers and scientists to write the software that will run on them.
The Commission is also spending public money on public-private partnerships and centres of excellence, together with ETP4HPC, intended to advance both hardware and software for Exascale computers.
Yet all this money, derived from European taxpayers, may in the end create intellectual property that is captured for the profit of commercial companies that are not European. On the commercial and industrial side, it is much less clear whether the Commission has any policy to foster the development of a European computer vendor that could rival, for example, the US consortia engaged on the CORAL project.
In answer to a question from the audience, Burgueno reiterated that as a public body, the Commission was not in the business of ‘picking winners’ among commercial companies.
Indeed, he refused to rule out the possibility that the European institutions charged with procuring the next generation of computers might opt for non-European vendors.
This is the fourth in a series of reports from PRACEdays15, held in Dublin last week. Robert Roe’s report on the role of HPC in the host country, Ireland, can be found here. On a similar theme, Tom Wilkie writes about how supercomputing for small companies can be made simple. The reasons why parallel programs need new maths are explained by Tom Wilkie in the first report from the conference.