End-users Need to Design Exascale Computers

It’s not size that counts but what you do with your supercomputer, delegates to the GPU Technology Conference in San Jose were told on 16 May.

End-user scientists and engineers need to get involved from the outset in the design of the next generation of machines – expected to be capable of delivering performance in the Exaflop range by the end of this decade – if they want the machines to produce useful scientific and engineering results.

Historically, too much emphasis has been put on supercomputer hardware and not enough attention paid to the application software that would run on the machines to produce the results that scientists and engineers want. So during a session on ‘Exascaling your apps’, Steve Scott, the chief technology officer of Nvidia, which organised the conference, warned that if exascale machines were to have a broad impact: ‘We need a wake-up call.’ He did not think that system software would be an issue in the Exascale domain, but rather: ‘I’m worried about application software.’

At present, no-one quite knows what the hardware will be for a successful Exascale machine, and this opens up an unprecedented opportunity for end-users to get involved in ‘co-design’, Satoshi Matsuoka of the Tokyo Institute of Technology told the session. ‘Co-design will be the big key,’ he said. Because the architectures of the processors and the systems for Exascale computing are not yet fixed, he said, co-design is needed to reflect the needs of the end-user applications in the architectures themselves, right from the outset.

He pointed out that even at the Petaflop level, ‘It has been a challenge to run efficient applications. Scaling up has been a challenge because of the transition to many-core and heterogeneous architectures.’ Steve Scott endorsed the point, saying that although today’s applications will run on a future Exaflop machine: ‘Will they run well? No!’

The US Department of Energy has already set up three co-design initiatives, according to Jeff Vetter of Oak Ridge National Laboratory. They will focus on the areas of combustion, materials science, and nuclear power. ‘In the past, the applications teams have been tossed new architectures and told to get on with it,’ he said. Now, in contrast, designers have to tell the users early on what the possibilities might be.

This story originally appeared on HPC Projects. It appears here as part of a cross-publishing agreement with Scientific Computing World.