You can almost hear the evil laugh in this interview with NVIDIA’s boss Jen-Hsun Huang for Forbes. It’s worth a skim, but a few things jumped out at me. First, this bit:
Forbes: Right. But that’s three and four cores. That’s not 60 or more cores.
Huang: As it turns out, redesigning your algorithm to work on more than one core, you might as well redesign it to work on [many cores]. If you’re going to re-factor or redesign your algorithm anyways, you might as well redesign your algorithm completely. My point is that we came at a perfect time, you know, that the CPU kind of hit the wall and everybody’s looking at multi-core. But multi-core, the results didn’t live up to the promise. And here we are with our technology called Cuda [a C-based architecture for coding in GPU] and GPU computing. And all of a sudden the speed-up is 50 times, 100 times, 200 times. And people are just astonished by the speed-ups. When was the last time that anything sped up a computer application that anyone used 50 times?
I understand the spirit of his first remarks here, but I don’t think he does justice to the difficulty programmers will face in retooling their code for multiple cores (anyone’s cores, not just NVIDIA’s). Also, while it’s true that code built to run on 60 cores will probably do well on 4, the reverse is not true, and asking programmers to go from 0-way to an effective 60-way parallelization in one step is a tall order.
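To make that asymmetry concrete, here is a minimal Python sketch (my own toy example, nothing from the interview): the same work decomposed into a hard-wired 4-way split versus many fine-grained chunks. The 4-way version can never keep more than 4 workers busy no matter how many you give it, while the fine-grained version scales down to 4 workers just as happily as it scales up to 60.

```python
# Toy illustration: coarse 4-way decomposition vs fine-grained decomposition.
# (Threads here only illustrate task structure; real speedups would need
# processes or a GPU, since CPython threads share the GIL.)
from concurrent.futures import ThreadPoolExecutor

N = 100_000  # problem size

def sum_squares(lo, hi):
    """Serial kernel: sum of squares over [lo, hi)."""
    return sum(i * i for i in range(lo, hi))

def coarse_4way(workers):
    """Hard-wired 4-way split: only 4 tasks exist, so at most 4 workers are ever busy."""
    bounds = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
    with ThreadPoolExecutor(workers) as ex:
        return sum(ex.map(lambda b: sum_squares(*b), bounds))

def fine_grained(workers, chunks=240):
    """Many small tasks: any worker count from 4 to 60+ stays busy."""
    bounds = [(i * N // chunks, (i + 1) * N // chunks) for i in range(chunks)]
    with ThreadPoolExecutor(workers) as ex:
        return sum(ex.map(lambda b: sum_squares(*b), bounds))

# Both decompositions compute the same answer; the difference is that
# coarse_4way(60) idles 56 of its 60 workers, while fine_grained(60) does not.
expected = (N - 1) * N * (2 * N - 1) // 6  # closed form for sum of i^2, i < N
assert coarse_4way(60) == fine_grained(60) == expected
```

Going the other way, from a 4-task program to one with enough parallel slack for 60 cores, usually means redesigning the decomposition itself, which is exactly the step Huang is waving away.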
Then this bit:
Forbes: How long before this is common in the workplace?
Huang: By the end of next year, you’re going to see GPU computing in the vast majority of the world’s personal computers.
Every single workstation by then will have GPU computing. I think that most clusters, most high-performance clusters around the world will have GPU computing inside. And because Microsoft included it in DX compute for Windows 7 and because Apple’s going to include it for Snow Leopard, you’re going to find GPU computing available to the mass market almost instantaneously overnight. This is a very, very important event.