More Community Response



In our recent survey, we asked the question, “Do you believe a roadmap exists that will get us to Exascale?”

63% of the survey respondents said, “NO” – they do not believe there is a roadmap that will get us to exascale.

Here are a few of the responses we selected from the community:

< < < < < > > > > >

Well, I believe there is a roadmap, but whether it happens in 2019, 2021, or later is not clear, in my opinion. It depends on funding for the high end.

Kimmo Koski
Managing Director, CSC
(the Finnish IT Center for Science)

< < < < < > > > > >

When you say roadmap, what do you mean? A product roadmap details product features and functions. If this is the question, the answer is ‘No’: a practical Exascale product cannot be built in the next 3-4 years. If, on the other hand, you are asking whether, with sufficient commercial motivation, the industry could build an Exascale system in the next 10 years, the answer is ‘Yes’, but it would require significant R&D investment. Today there is not sufficient commercial motivation to set this as a product goal. For early Exascale systems, the likely customer is a government with big science or national security goals. If such a customer stepped up to share the R&D risk and commit to building such a system, the goal is achievable in 10 years.

Wilf Pinfold
Director of Extreme Scale Computing, Intel Labs

< < < < < > > > > >

I’d guess that the first reaction of many to this survey result (that most respondents don’t believe a roadmap exists now to get us to exascale) is horror: how can we be only 6 years away from a target delivery, with a commonly assumed 6 years of R&D required, and not have a roadmap in place?

But I’m not surprised by either the survey result or the lack of a roadmap. Indeed, I’m more surprised by the implied 37% who think a firm roadmap does exist now. Why? Because the leading edge of supercomputing (exascale, in this case) has traditionally involved living at the horizon of the technology, creating the roadmap no further ahead than essential. At the same time, we continuously seek new technologies and plan much further ahead, but that activity is usually far too speculative to be called a roadmap. In theory, this ensures we can move as rapidly as possible towards the next level of performance, flexibly evaluating and assimilating new technologies as they emerge.

Does that mean we don’t need a roadmap? No, we do need to look and plan ahead. But any roadmap we might have this year should, I think, be substantially updated over the next few years to become the roadmap that actually delivers exascale.

There is, of course, the argument that it all depends on what we mean by “exascale”. Do we now have an understanding of the technology developments required to deploy a system in 2018 capable of achieving >1 exaFLOPS on HPL? Almost certainly, yes. It would cost an unjustifiable amount of money to deploy, to run (power), and to ensure reliability, but we mostly know how it could be done. Do we now have an understanding of the technology developments required to deploy a system in 2018 capable of exploiting peak exaFLOPS capacity with a reasonable degree of efficiency, across a range of applications, and at a justifiable cost and power budget? No, I don’t think so; see my comment above about roadmaps.

Perhaps a more critical question is: “Are we staying on target to have the right level of roadmap as we need it?” That raises questions about the level of funding supporting the R&D towards exascale, the degree of agreement in the community on the best ideas and technologies to explore, the quality of the business case for investment (including the political and wider societal aspects), the engagement with the right range of stakeholders, the involvement of future generations of supercomputing leaders, and so on.

Andrew Jones
Vice President, HPC Business
The Numerical Algorithms Group (NAG)

< < < < < > > > > >

And our final response comes from Dona Crawford, Associate Director for the Computation Directorate at LLNL. Dona was very clear in stating that this response does not represent the position of her institution, of the program that funds the majority of its computing efforts, or of the DOE.

The important question is whether the HPC community will advance the state of the art such that we continue to solve important, long-term, complex problems…and I believe the answer to that is YES. While there is no specific blueprint for what to do when, we understand the challenges and are investigating several technical hardware and software solutions. As a result, we will make progress toward increased processing speeds and improved data manipulation. At some point in the future, we will note that we crossed the exascale threshold and are on to the next 3 orders of magnitude threshold, namely zettascale. That said, the “we” in this response is purposely ambiguous. The HPC community is global and it remains to be seen which countries are the clear leaders of the very near future.

Dona Crawford
Associate Director, Computation Directorate
Lawrence Livermore National Laboratory

For related stories, visit The Exascale Report Archives.