The most dangerous answer…

…from a computer program, as a professor at my alma mater used to say, is an answer that looks “about right.” Dan Reed has an interesting post on his blog right now about accuracy in scientific applications:

The parallel application contains millions of lines of code, combining multiple models of physical, engineering, biological, social and/or economic processes, operating over temporal and spatial scales that span ten orders of magnitude. It was written by tens or even hundreds of graduate students, post-doctoral associates, software developers and yes, even a few professors, over a decade. It involves numerical libraries and functions from diverse research groups and companies, and a single execution requires thousands of hours on tens of thousands of processor cores. In short, it’s a typical example of an extreme scale high-performance computing code.

…Are you afraid? We all should be. It is time to embrace the scientific process for computational science. We must view the execution of a large, multidisciplinary code as what it is – an experiment, with all the possible error sources attendant with any physical experiment. This includes repeating the experiment (computation) to determine confidence intervals on the answer, conducting perturbation studies to determine the sensitivity of the answer to environmental (hardware and software) conditions, identifying sources of experimental bias and defining the experiment rigorously for independent verification.
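Reed's prescription (repeat the computation, report confidence intervals, perturb the conditions) can be sketched in miniature. In this hedged example, a toy Monte Carlo estimate of pi stands in for a large simulation; the function name, seed scheme, and sample counts are all illustrative, not anything from Reed's post:

```python
import random
import statistics

# Hypothetical stand-in for an extreme-scale code: a noisy
# numerical estimate of pi via Monte Carlo sampling.
def run_simulation(n_samples, seed):
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

# "Repeat the experiment": run the same computation under varied
# conditions (here, different random seeds) and treat the outputs
# as samples from which to estimate a confidence interval.
results = [run_simulation(100_000, seed) for seed in range(30)]
mean = statistics.mean(results)
stdev = statistics.stdev(results)

# Approximate 95% confidence interval on the mean of the 30 runs.
half_width = 1.96 * stdev / (len(results) ** 0.5)
print(f"estimate: {mean:.4f} +/- {half_width:.4f}")
```

For a real code, the "perturbation" would vary compiler flags, processor counts, or library versions rather than just seeds, but the statistical treatment of the answers is the same.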

Those are the beginning and ending paragraphs. The stuff in the middle is even better; I recommend a read.

This also ties in nicely with one of the primary recommendations of the International Assessment of Research and Development in Simulation-Based Engineering and Science (SBE&S), released late last month, which I summarized for HPCwire here. A representative snippet from that report on this topic:

A report on European computational science (ESF 2007) concludes that “without validation, computational data are not credible, and hence, are useless.”…The data and other information the WTEC panel collected in its study suggests that there are a lot of “simulation-meets-experiment” types of projects but no systematic effort to establish the rigor and the requirements on UQ and V&V that the cited reports have suggested are needed.
