According to this story by Steve Lohr at the New York Times, a blue-ribbon advisory group report made to the White House last week said that research funding might be better deployed elsewhere than towards an international speed race based on a machine’s performance on a particular number-calculating benchmark such as LINPACK.
In presenting the report last Thursday, David E. Shaw, chief scientist at the investment and technology firm that bears his name, and a member of the advisory group, observed that gaining the top spot on the annual ranking of supercomputers is “an arms race that is very expensive and may not be a good use of funds.”
I think it is interesting to note that the President’s Council of Advisors on Science and Technology report makes no mention of Exascale programs, which may be beyond its scope. It does seem to have a bone to pick with the TOP500 though:
If Top500 rankings can no longer be viewed as a definitive measure of a country’s high performance computing capabilities, what goals should our nation be setting for fundamental research in HPC systems, and what criteria should be used in allocating funding for such research? Given the natural inclination to quantify the relative performance of competitors in any race, there is a temptation to replace the traditional FLOPS-based metric with another fixed, purely quantitative metric (or perhaps two or three such metrics) that policymakers can use on an ongoing basis to rank America’s competitive position in HPC relative to those of other countries. This approach, however, is subject to several pitfalls that could both impair our ability to maintain our historical leadership in the field of high-performance computing and increase the level of expenditures required to even remain competitive.
First, it is no longer feasible to capture what is important about high-performance computing as a whole using one (or even a small number of) fixed, quantitative metrics, as a result of:
- the progressive broadening of our nation’s requirements in the area of high-performance computing;
- the consequent “splintering” of the set of computational tasks required to satisfy these requirements;
- a wide range of substantial advances in the various technologies available to perform such computational tasks;
- significant changes in the “bottlenecks” and “rate-limiting steps” that constrain many high-performance applications as a result of different rates of improvement in different technological parameters.
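The report’s point about “bottlenecks” and “rate-limiting steps” can be illustrated with a toy roofline-style calculation. The machine and workload numbers below are entirely hypothetical, chosen only to show how a peak-FLOPS ranking can invert once a workload is memory-bandwidth-bound rather than compute-bound like LINPACK:

```python
# Toy roofline model: why a single FLOPS ranking can mislead.
# All machine parameters and arithmetic intensities are illustrative
# assumptions, not measurements of any real system.

def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline bound: min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Two hypothetical machines: A wins on peak FLOPS, B wins on memory bandwidth.
machine_a = {"peak": 10000.0, "bw": 100.0}   # GFLOP/s, GB/s
machine_b = {"peak": 4000.0,  "bw": 400.0}

# Two workloads: a LINPACK-like dense solve (high arithmetic intensity)
# vs. a stream/stencil-like kernel (low intensity, ~0.25 flops/byte).
linpack_like = 50.0   # flops per byte
stream_like = 0.25

for name, m in [("A", machine_a), ("B", machine_b)]:
    dense = attainable_gflops(m["peak"], m["bw"], linpack_like)
    bound = attainable_gflops(m["peak"], m["bw"], stream_like)
    print(f"Machine {name}: dense {dense:.0f} GFLOP/s, "
          f"bandwidth-bound {bound:.0f} GFLOP/s")
```

Under these made-up numbers, machine A wins the dense (LINPACK-like) comparison while machine B wins the bandwidth-bound one, which is exactly the kind of splintering the report describes: no single fixed metric captures both.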
The rest of the 120-page report is an interesting read so far, but there is a lot to go through, so I promise to follow up with some armchair analysis.