GPUs and HPC


via HPC Answers

Chris at HPC Answers has an interesting post addressing the question “Are GPUs the next wave in HPC?”

As much as I’d prefer to see someone like ClearSpeed succeed over GPU-based general-purpose computing, I’ve seen enough in this industry to understand commoditization, volume, and market penetration. I believe a more likely scenario is that CPU + GPU will indeed become standard in blade-based clusters aimed at technical computing applications.

I agree, on both counts, unless the FPGA vendors can move us away from having to code algorithms in hardware description languages and can deliver much better performance (because CPU + GPU will probably be cheaper). Advances like SystemC may address the first point; the jury is still out on the second.

Given their energy advantages (10W “average” for the ClearSpeed co-processor versus 100-200W for some high-end GPUs), I wouldn’t discount ClearSpeed just yet, though.


  1. […] Since this was announced at SC’06 it isn’t exactly news, but it is interesting and dovetails with the GPU discussion we had earlier. […]


  1. I suppose the Cell falls into the same realm of possibility. Given all the recent problems with bus contention and memory bandwidth as multicore systems compete for resources, I too believe that some form of “coprocessor” system is going to come to the forefront. Whether it’s GPUs, ClearSpeed, or Cell is still to be determined, but of the three only GPUs have the market volume to make it a reality.

  2. It’s all about power. Blade centres have reached their power limits, and expansion may no longer be possible as it once was. This issue has somehow slipped the public consciousness, but there is legislation in the works to limit the exorbitant power use of these systems. The trend is inescapable, and you may have noticed that Intel/AMD literature is now starting to mention performance/watt. This is absent from GPU literature.

    For equivalent performance, GPU power is an order of magnitude higher than ClearSpeed’s. This is an architectural issue: the GPU’s nightmare giga-threading programming model (good for pixels) will just burn those watts. Cell hasn’t addressed this power issue either. GPU and Cell boards burn 150-200 watts each; compare this to the 25 watts of the Advance boards. You can install 8 Advance boards within the power budget of ONE GPU or ONE Cell, which is nearly an order of magnitude more potential performance for the same power budget.
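    The commenter’s power-budget arithmetic can be checked with a quick sketch. The wattage figures below are the ones quoted in the comment (not measured values), and per-board performance is assumed equal, which is the commenter’s stated premise:

    ```python
    # Wattage figures quoted in the comment above (assumptions, not measurements).
    GPU_OR_CELL_WATTS = 200   # high end of the quoted 150-200 W range
    ADVANCE_WATTS = 25        # ClearSpeed Advance board, per the comment

    # How many Advance boards fit in one GPU/Cell power budget?
    boards_per_budget = GPU_OR_CELL_WATTS // ADVANCE_WATTS
    print(boards_per_budget)  # prints 8
    ```

    At the low end of the quoted range (150 W) the same arithmetic still gives 6 boards per budget, so the “near an order of magnitude” claim holds under either figure.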

    GPU+CPU hybrids will be really good for low cost PCs and handhelds where integration costs really do the trick.