Andrew Jones has once again published his monthly insight into all things HPC on ZDNet UK. This month, he touches on a widely known but seldom discussed topic in high performance computing: supercomputers lie! Given the rapid progression of technology in HPC, we can easily get caught up in "speeds and feeds" talk — our new machine runs the same algorithm some N times faster than our old machine. Indeed, the value of `$> time my_app` is most likely correct, but what about the numerical results?
> Many users of models are rigorous about validating their predictions, especially those users with a strong link to the advancement of the model or its underpinning science. But, unfortunately, not all users of models are so scrupulous.
>
> They think the model must be right — after all, it is running at a higher resolution than before, or with physics algorithm v2.0, or some other enhancement, so the answers must be more accurate. Or they assume it is the model supplier’s job to make sure it is correct. And yes, it is — but how often do users check that their prediction relies on a certified part of parameter space? [Andrew Jones]
Before you hit ‘send’ on that flaming comment, we realize that there are probably very few applications analysts who are less than reputable. Given the complexity of many modern high performance computing applications, bugs will exist and precision errors can occur without deliberate provocation. In mentoring the younger members of our community, I always offer two important pieces of advice: first, always check your results and precision; second, never eat the yellow snow.
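To make the "check your results" advice concrete, here is a minimal sketch (not from Andrew's article — the function name and tolerances are illustrative) of why exact equality is the wrong test: reordering floating-point operations legitimately changes the low-order bits of a result, so comparisons against a reference run should use a tolerance.

```python
import math

def results_agree(new, ref, rel_tol=1e-9, abs_tol=1e-12):
    # Compare element-wise with a tolerance instead of exact equality;
    # the tolerance values here are illustrative and should be chosen
    # from the known sensitivity of your own application.
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(new, ref))

# A classic demonstration that summation order matters: the exact
# answer is 2.0, but naive left-to-right summation loses one of the
# 1.0 terms to rounding at 1e16.
values = [1e16, 1.0, -1e16, 1.0]
naive = sum(values)        # -> 1.0 (the first 1.0 is absorbed)
exact = math.fsum(values)  # -> 2.0 (correctly rounded sum)
```

The point is not that one answer is "the" right one, but that two runs of the same code — compiled differently, parallelized differently, or run on a new machine — can disagree at this level without either being buggy, which is exactly why results need validating rather than assuming.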
As always, Andrew’s article is a great read. Head over to ZDNet UK and read it here.