Are We Shortchanging Ourselves on Basic Research in the Aggressive Race to Exascale?

One thing rings loud and clear as I talk with community leaders about exascale: everyone has an opinion. I write this not to present an opinion, but to raise a concern.

I recently tried to absorb at least some of the content of the report, “Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology,” prepared by the President’s Council of Advisors on Science and Technology (PCAST).

First, if you have ever tried to work your way through one of these reports, you know it’s quite a challenge. The language demands attention to the purposeful use of certain words and terms, and the conclusions drawn from a tremendous amount of research are presented with what appears to be very careful positioning. But filtering through all of that, the recommendations, if acted upon, give a strong sense of where the nation needs to place its priorities within this scope of technology research.

This particular line, from page XIII of the Executive Summary, jumped out at me.

“At the same time, new investments must not supplant continued investment in important core areas such as high performance computing, scalable systems and networking, software creation and evolution, and algorithms, in which government-funded research is making important progress.”

That passage captures my concern. We can’t run the race to exascale at the expense of the critical basic research needed to improve our near-term use of HPC systems at the petascale, nor can we simply start “designing” exascale systems based on the models and architectural approaches currently in vogue. We can’t shortchange the glaring need for pure research, and the stated goal of an exascale system by 2018 raises the concern that we might not be allowing adequate time for that research. In other words, we may be putting the “D” before the “R” in good old basic R&D.

The “race” to exascale, now fueled even more by efforts in China, Russia, Japan, and Europe, creates a false sense that technology leadership is determined by standing on the TOP500 list of the world’s fastest computers. The report addresses this concern several times, again with an emphasis on the importance of fundamental research rather than racing ahead to hit meaningless milestones. Here is another key excerpt:

“Although it is important that we not fall behind in the development and deployment of HPC systems that address pressing current needs, it is equally important that we not allow either the funding allocated to the procurement of large-scale HPC systems, or undue attention to a simplistic measure of competitiveness, to “crowd out” the fundamental research in computer science and engineering that will be required to develop truly transformational next-generation HPC systems.”

And it ends with this profound statement.

“To lay the groundwork for such systems, we will need to undertake a substantial and sustained program of fundamental research on hardware, architectures, algorithms and software with the potential for enabling game-changing advances in high-performance computing.”

A “substantial and sustained program of fundamental research” is a far cry from trying to build exascale systems based on what we know today.

My SC09 colleague William (Bill) Gropp, who was a member of the working committee that helped draft the report, addresses this as well in his recent blog post from NCSA.

According to Bill, “the point is that we need more research for Exascale — we shouldn’t rush to build the best we can with what we know now — rather, we should invest in trying a number (not just one!) of high risk but high payoff approaches to Exascale.”

Bill and I are on the exact same page when it comes to the critical importance of R&D. Bill comments, “The short form is we need more ‘R’ before we start the ‘D’. The challenge is that if you want an Exascale system by 2018, you don’t have much time for the ‘D’, so there’s a tendency to just get started on this.”

And he continues, “Let me note that this does not mean that things like the software stack projects need to wait. Rather, they need to start now but focus on the research issues that are expected to match likely Exascale hardware (the success or failure of such software efforts will inform and affect the hardware).”

In summary, doing things the way we’ve always done them is not going to get us there.

We need heavy investment in exascale research, and for all the right reasons: not for bragging rights on a list of very fast computers, but for game-changing, world-changing scientific discovery. Research on this scale requires a long-term, steady commitment to R&D. We cannot move forward aggressively with exascale development efforts without doing the proper research first. That research will bear on today’s challenges in achieving sustained petascale computing, and it will open our eyes to new approaches, new designs, and a new way of thinking.