Podcast: John Gustafson on What’s Next for Parallel Computing


In this podcast from Radio New Zealand, John Gustafson of A*STAR, the Agency for Science, Technology and Research in Singapore, discusses parallelism and high performance computing.

Gustafson is the father of Gustafson’s Law, which gives the theoretical speedup, at fixed execution time, that can be expected of a task running on a system whose resources are improved; in other words, it describes what happens when the problem size scales with the available processing power.
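In its commonly cited form, with s the serial fraction of the workload and N the number of processors, the scaled speedup is

    S(N) = s + (1 - s)N = N - s(N - 1)

which grows without bound as N increases, provided the problem size grows to fill the machine.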

“What Gene Amdahl pointed out in a debate in 1967 was that if you try to throw a lot of processors at one problem, eventually you’ll hit diminishing returns because some of your problem cannot be done in parallel. And you can actually write that as an algebra formula that says, ‘Look, if ten percent of your problem is serial, then even if you throw a thousand processors at it, you will never get a tenfold speedup.’ So it’s just not worth doing, and for the longest time the industry used that as an excuse for not going parallel, which meant that we had to use faster and faster clock rates, and these sequential machines got bigger and hotter and fancier, but they hit the limits of physics. And finally what broke it was my pointing out that people don’t use a computer to do the same size problem that they did twenty years ago; they keep doing larger and larger problems. The problem size always expands to what you can do. It’s just like you wouldn’t use a jet plane to go and get the mail, right? You use it to go somewhere far away. So a very powerful computer actually doesn’t have that limitation, and so the algebra formula that I came up with showed that there is no real limit to parallel processing if you can scale the data of the problem to match the amount of power available.”
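For reference, the algebra formula described in the quote is Amdahl’s Law. With s the serial fraction and N the number of processors, the fixed-size speedup is

    S(N) = 1 / (s + (1 - s)/N)

so with s = 0.1 and N = 1000 the speedup is 1/(0.1 + 0.9/1000), or about 9.91, and no number of processors can push it past 1/s = 10: the “never a tenfold speedup” bound Gustafson mentions.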

Gustafson is the author of The End of Error: Unum Computing, which explains a new approach to computer arithmetic: the universal number (unum). The unum encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. This new number type obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power.
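As a quick illustration of the kind of silent rounding error the book targets (this is ordinary IEEE 754 arithmetic in Python, not the unum format itself):

    from fractions import Fraction

    # IEEE 754 double precision rounds silently: neither 0.1 nor 0.2
    # has an exact binary representation, so their sum is not 0.3.
    a = 0.1 + 0.2
    print(a == 0.3)           # False
    print(format(a, ".17g"))  # 0.30000000000000004

    # Exact rational arithmetic recovers the mathematically correct
    # answer, at the cost of variable-width representations; managing
    # that trade-off explicitly is the kind of problem unums address.
    b = Fraction(1, 10) + Fraction(2, 10)
    print(b == Fraction(3, 10))  # True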

In a related slidecast, John Gustafson presents “An Energy Efficient and Massively Parallel Approach to Valid Numerics.”

Download the MP3 * Sign up for our insideHPC Newsletter

Comments

  1. Brent Gorda says

    Nice interview, John. I admire your ability to explain HPC parallel computing in terms she could understand.