In this installment of Andrew’s Corner [courtesy of his monthly article at ZDNet/UK], our buddy Andrew Jones discusses the future potential of GPUs [and other accelerated computing paradigms] in HPC. He takes an interesting approach that I haven’t come across before: Andrew makes his point by drawing parallels between the RISC-to-x86 migration of the late 1990s and the recent rise of GPUs in HPC.
…both AMD with x86-64 and Nvidia with general-purpose GPU computing (GPGPU), the provision of a software ecosystem of compilers, ACML, Cuda and community sites was critical to create momentum behind the technology. Some might even say that the success was because of the software and community rather than because of any advantage of the hardware over other similar solutions.
This is reasonably similar to Intel’s push with the IA-64 EPIC architecture. Technically speaking, it was more advanced than the x86 processors of the time [and some argue that it’s still more advanced]. However, x86 had the momentum of software gurus around the world: compilers, libraries, SDKs and whole applications were far more prevalent. That ecosystem, more than any potential performance advantage, made the less expensive technology the more appealing one.
The CPU vs GPU debate sounds similar to me so far. It seems that price won over competing arguments in the past. For example, another processor trying to take on the RISC dominance at the time was Intel’s IA64 Itanium. It offered something potentially better but at a price premium. x86-64 offered good enough, but cheaper.
So what gives? With GPUs we now have a higher-performing device at a lower entry price. What will this do to the market in general? What effect will it have on large ISV-type codes?
If we had to develop a major application now, for a longish use-life, we’d have to make a gamble between OpenMP or Cuda or OpenCL or the various products that hope to bridge the gap. Until that is fixed and GPU is generic enough to mean it doesn’t matter at develop-time whose product will be used at run-time, the investment of effort to get the performance and cost rewards is a hard call.
As always, Andrew has a very convincing argument. So far, the GPU railroad has stopped at two of the three stations required to become a long-term item in the HPC landscape: price and performance. The third station on the horizon is standardization.
If you’re interested in reading more, check out Andrew’s full article here.