In this special guest feature, Robert Roe from Scientific Computing World reports that a new exascale computing architecture using ARM processors is being developed by a European consortium of hardware and software providers, research centers, and industry partners. Funded by the European Union’s Horizon 2020 research program, a full prototype of the new system is expected to be ready by 2018.
The project, called ExaNeSt, is based on ARM processors, originally developed for mobile and embedded applications. In this respect it resembles another EU project, Mont Blanc, which also aims to design a supercomputer architecture around ARM processors. Where ExaNeSt differs from Mont Blanc, however, is in its focus on networking and on the design of applications. ExaNeSt is co-designing the hardware and software, enabling the prototype to run real-life evaluations – providing a stable, scalable platform intended to encourage the development of HPC applications for this ARM-based supercomputing architecture.
An impressive number of exascale projects are under development around the world, some of which were discussed as part of Scientific Computing World’s coverage of SC15, the US supercomputing conference and exhibition, held in Austin, Texas, in November last year. At the event, the European DEEP program presented its results and announced that applications were now running on the system. Also on display at the event was the NEXTGenIO project, led by the Edinburgh Parallel Computing Centre (EPCC) in Scotland, which focuses on the particularly acute exascale problem of developing innovative I/O solutions.
The ExaNeSt consortium consists of industry partners such as Iceotope, a specialist in HPC cooling, and Allinea, which provides software for HPC performance analytics. Other participants include eXactlab, an SME that provides on-demand access to HPC resources, and EnginSoft, an Italian software provider that specializes in modeling and simulation. Also taking part are MonetDB Solutions, which provides technical consultancy for the open-source column-based database system MonetDB, and Virtual Open Systems (VOSYS), a French software company that focuses on embedded systems development.
Finally, and in what might be a crucial step, several research centers from across Europe have also been brought into the project, including the Greek Foundation for Research and Technology – Hellas (Forth). Manolis Katevenis, head of computer architecture at Forth’s Institute of Computer Science (ICS), said: “As project coordinators, we will seek an efficient collaboration of all partners to build the prototypes – as we have done time and again in the past – because only through real, working systems can computing advance to its next stage.”
The Italian Istituto Nazionale di Astrofisica (INAF) will also be included in the project; through its computational cosmology group, it has worked on large-scale HPC projects such as the European PRACE initiative, the Italian HPC consortium CINECA, and the ARC Centre of Excellence for All-Sky Astrophysics (CAASTRO). Another Italian centre, the National Institute for Nuclear Physics (INFN), has played a major role in the development and operation of large-scale computing systems and facilities since 1984. Both of these institutes bring experience in solving some of today’s most difficult scientific challenges using HPC.
The University of Manchester (UoM) brings its expertise through its Advanced Processor Technologies (APT) research group. The university has produced a number of high-tech spin-off companies, including ICL’s Goldrush database server, Amulet (low-power processor architectures), bought by ARM, and the Transitive Corporation, bought by IBM. Manchester is leading the work on the development of interconnects, and will be contributing to the design, modeling, and development of the interconnection infrastructure, as well as to the analysis of application needs.
Finally, the German Fraunhofer institute – or, more accurately, the Fraunhofer Institute for Industrial Mathematics (ITWM) and its Competence Center for High Performance Computing (CC-HPC) – will focus on the development of parallel applications and of HPC tools. Fraunhofer has a long track record in HPC development, having previously worked on projects such as the communication middleware GPI (Global Address Space Programming Interface), the GPI-Space programming environment for parallel and big data applications, and the BeeGFS parallel file system, formerly known as FhGFS.
In theory, the easiest way to reach exascale levels of computing would be simply to increase the scale of supercomputers using today’s technology. However, current technology faces many technical limitations on the road to an exascale architecture. Key barriers to exascale development are energy consumption, storage demands, I/O congestion from increasingly data-hungry applications, and resiliency, as these supercomputers will be on a scale unlike anything in use today.
ExaNeSt aims to address some of these challenges using the intrinsically energy-efficient ARM cores, quiet and power-efficient liquid cooling, non-volatile memory integrated into the processor fabric, and the development of innovative, fast interconnects that avoid congestion.
One important distinction between this consortium and some of the other European efforts is its heavy reliance on embedded systems developers, who bring a crucial understanding of the ARM architecture; on application developers, who are absolutely critical for developing software for ARM; and on researchers with HPC experience, who can translate that work into applications that scale effectively across an exascale supercomputer.
However, it will not all be smooth sailing; significant roadblocks still remain before exascale computing can be fully realized. Among these are hardware redundancy and resiliency, which are needed if one million processors are to work in unison, and the challenge of scaling application software to run on exascale supercomputers. Some of these challenges have been discussed previously in Scientific Computing World. One article from last year, “Exascale: expect poor performance”, looked at the software challenges, while “Will a European company build Europe’s first Exascale computer?” highlighted the challenges facing European supercomputing – as investment continues to lag behind its US and Asian counterparts.