Analyze This: Is Exascale On Track?

When companies are developing their strategic investment plans, they often turn to industry analysts as a sounding board and sanity check. Analysts can offer a unique, big-picture perspective gathered from their work with numerous companies across multiple countries. They not only serve as historians, recording and analyzing what happened in previous years, but also help us prepare for new trends and market conditions.

The Exascale Report caught up with Earl Joseph, a Vice President at IDC and the Executive Director of the HPC User Forum, to get his perspective on what’s really going on with exascale development.

The Exascale Report: IDC has a good perspective on what’s really going on with exascale development. First off, do you think the goal of achieving an exascale class system by 2018 is realistic?

Earl Joseph: Exascale systems will happen in stages, the same as with petascale systems. First will come a peak exascale system, then a Linpack exascale system, and then a system that achieves sustained exascale performance on a real-world application. 

A full exascale system by any of these definitions is possible by 2018, but it may not be a “perfect” system, in that it may require more power than desired, it may not fit into the targeted spatial envelope, and it may be more special-purpose than general-purpose. But in any case, it will provide a new level of capability.

However, some of the desired attributes of exascale computers may not be achieved by 2018:

  • Power requirements are important. The total power draw needs to be at least reasonable for a large data center. While 10 to 20 MW is clearly desired, 30 MW or more may be needed for the first exascale systems.
  • Ease of use is important. Exascale systems should be usable by more than just the brightest programmers, and they should run more than just a few applications at large scale. The early systems will be able to deliver tremendous results running larger iterations of smaller problems, but it remains an open question how many large-scale applications will be running in 2018.
  • Reasonable costs are also important. We expect the first exascale systems to cost $500 million or more each, if you include the unique R&D applied to the system. Within a few years, the costs should fall to well under $200 million per system.

TER: Can you point to an exascale-related activity that you would label as ‘impressive’ – some early effort that shows true promise or something creative that deserves to be called out?

Joseph: The DOE is looking at some very interesting application areas and is investigating the broad aspects required to make exascale systems more widely useful, such as storage, data, middleware, new applications, and new memory types. NSF is also making great progress as it brings up Blue Waters and prepares a broad set of users for the next round of large petascale-class systems. RIKEN is taking an interesting approach by putting researchers and computer scientists together in the same location to help develop new synergies and ideas.

One litmus test for “impressiveness” will be when multiple real-world codes can be run across a substantial fraction of an exascale machine, say 25% or more.

TER: In your opinion, does any one country have an edge in leading the race toward exascale class systems?

Joseph: A few years ago I would have said that the U.S. was going to be the clear leader, but today China has clearly entered the scene and is growing its knowledge base and capabilities very quickly. Japan may also have a few more surprises given its long history in HPC. In addition, Europe may put into place a different strategy, providing deep leadership in how larger HPC systems are used to make its scientists, engineers, and researchers far more productive.

TER: Is all the focus on exascale causing us to shortchange necessary development efforts for sustained petaFLOPS?

Joseph: Yes and no. Having a goal to create the next generation of very large supercomputers helps to raise all boats; it’s a critical first step. An order-of-magnitude increase in the capability of HPC systems can redefine many things, from basic scientific research and understanding all the way to designing better products.

At the same time, an overemphasis on hardware would be a mistake, and we need to find ways to significantly change how applications work on large systems. The problem isn’t so much how you get a few applications to scale to, say, a million cores, but how you get a thousand existing applications, some of which may run on only one or a few cores today, to scale 1,000-fold.

TER: On the topic of global cooperation, so much exascale development effort seems to be focused in Europe, with various collaborative research labs. Why Europe?

Joseph: Europe is clearly becoming a focal point in exascale development, with many labs and initiatives underway. Europe recognizes the importance of HPC in being a scientific leader and in growing its economies. Europe also has major strengths in the use of HPC, in its HPC centers, in mathematical and algorithm development, in developing better software, and in many application domains.

TER: What do you see as the primary applications benefiting from the power of an exascale system?

Joseph: Initially it will likely be the traditional large-scale applications that are used today, but the additional promise will come from applying exascale systems more broadly to build better products, to better understand core materials, and to create entirely new types of products in areas such as medicine and healthcare.

Many new areas will open up with exascale computers. Imagine having a financial simulator that can model the world’s economy in enough detail to show when major recessions are coming, ahead of time. And then imagine being able to “test” possible solutions to see what the best approach would be before spending trillions of dollars on an untested one.

NOTE: Readers may also be interested in the video interview conducted with Earl Joseph and Steve Conway at SC10, discussing their recent study: A Strategic Agenda for European Leadership in Supercomputing — HPC 2020. The report is available as a free download.

For related stories, visit The Exascale Report Archives.