Video: Does the TOP500 Need to Evolve?


https://www.youtube.com/watch?v=9cs1Zs-VuJk

In light of the recent controversy over the validity of the TOP500, I thought it would be good to reprise this interview with Sharan Kalwani on why he thinks the benchmark needs to evolve to catch up with modern HPC. This 2011 interview originally appeared on TOP500.org, but that page (pictured below) has somehow vanished.

Comments

  1. What is the controversy?

    • As described by Bill Kramer, NCSA did not submit a benchmark result for the Blue Waters supercomputer to the TOP500, even though the system would have easily ranked in the top ten. The reasons cited by Kramer are that the LINPACK benchmark is not a valid performance measure of modern systems and that acquiring supercomputers based on potential TOP500 ranking hurts users.

      Read more: http://www.ncsa.illinois.edu/News/Stories/TOP500problem/

  2. Never mind… I missed your previous post. Thanks for the info!

  3. There are three distinct points here:

    1) “acquiring supercomputers based on potential TOP500 ranking hurts users” – I agree with this sentiment and am glad NCSA did this.

    2) “the LINPACK benchmark used is not a valid performance benchmark of modern systems” – this is only partly true – LINPACK has ALWAYS been only ONE measure of a system – and in that way is still valid today – but the problem arises because some people treat it not as one of many metrics, but as THE metric.

    3) Neither faulty use of the Top500 (e.g. procurement) nor the use of a single benchmark (LINPACK or any other single metric) means that the Top500 is useless. The value – and correct use – of the Top500 is 500 data points consistently measured twice a year over 20 years – not any one data point.

  4. It can’t really be called a controversy when everyone agrees it’s a useless metric for measuring capability.