Wayfinding in the HPC transition


My friend Andy Jones of the Numerical Algorithms Group (purveyor of fine software and colorful marbles emblazoned with the iconic NAG logotype) has posted the first in a multi-part series on the future of technical computing. In it he rightly points out that while GPUs are getting a lot of attention these days, they are at present more potential than proven performance, and we’ve been through this before.

By way of illustration he offers an anecdote from his youth in which he ports an HPC code to a PC and finds it runs faster (per processor) on his desktop. Why?

The answer then (and now) was that I was extrapolating from only one application, and that application could be run as lots of separate test cases with no reduction in capability (i.e. we didn’t need large memory etc, just lots of parameter space).

…Why do I foist this reminiscence on you? Because the current GPU crisis (maybe “crisis” is a bit strong – “PR storm” perhaps?) looks very much the same to me. The desktop HPC surprise of my youth has evolved into the dominant HPC processor and so for some years now, we have been developing and running our applications on clusters of general purpose processors – and a new upstart is trying to muscle in with the same tactic – “look how fast and how cheap” – the GPU (or similar technologies – e.g. Larrabee, sorry Knights-thingy).

I encourage you to click over to his post to get, as that guy on the radio used to say, the rest of the story.