Hot Chips wrap up

Michael over at HPCwire has a nice blog post recapping the Hot Chips conference that ended last week.

The star of the show this year was the upcoming Power7 processor being developed by IBM. This is the chip that will power NCSA’s “Blue Waters” supercomputer, a 10-petaflop machine slated for deployment in 2011. Blue Waters will be the first production deployment of IBM PERCS, a set of technologies partially funded by DARPA under its High Productivity Computing Systems (HPCS) program. For more pedestrian uses, the Power7 will also be used in IBM’s Power 5xx server line. The new chips should show up in IBM gear sometime in 2010.

Michael does end with a little note on GPUs that I wanted to comment on:

So with cores multiplying like rabbits on CPUs, what do we need GPUs for? At this year’s Hot Chips event, the GPU contingent was relatively silent. No new graphics processors were presented, although NVIDIA CEO Jen-Hsun Huang did manage to rain on the CPU parade a little bit by drawing attention to the disparity in performance gains between the two major processor architectures. Huang predicted that over the next six years GPU compute power will increase by a factor of 570, while CPU architectures will only increase by a factor of three. That seems like an awfully optimistic scenario for GPUs, and a rather pessimistic one for CPUs. In fact, in six years there may not even be a strict delineation between GPUs and CPUs. I guess we’ll just have to wait for Hot Chips 2015 to see if anything we thought in 2009 was even remotely accurate.

GPUs, having been created for graphics pipelines, are difficult to program for general computation. But tools that bridge this gap are being developed, including IDEs, compilers, and libraries. That difficulty notwithstanding, the technology is usable even today, and the economics of mass-market adoption, combined with the GPU’s network effect (the GPU becomes more valuable as more people use it), make the kind of argument that is irresistible to HPC people, who will always buy cheap, as long as it works, so they can get more.
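To make that point about bridging tools a little more concrete, here is a minimal sketch using PyCUDA, one of the libraries in this space circa 2009, to run an elementwise computation on the GPU without writing any kernel code. The array sizes are arbitrary, and the example assumes PyCUDA and a CUDA-capable device are installed; treat it as an illustration, not a benchmark.

```python
# A minimal, illustrative sketch (assumes PyCUDA and a CUDA device).
import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.gpuarray as gpuarray

n = 1 << 20                     # one million elements (arbitrary size)
a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)

# Copy to the GPU, multiply elementwise there, and copy the result back.
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c = (a_gpu * b_gpu).get()

# Verify against the CPU result.
assert np.allclose(c, a * b)
```

The point is that the library hides the memory transfers and kernel launches behind ordinary array syntax, which is exactly the kind of gap-bridging the paragraph above describes.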

Of course, Michael’s question isn’t about whether people will want to use them, it’s about whether there will be any need for them at all as CPUs keep piling on cores and, perhaps, go to a heterogeneous mix of cores. If cores double every 18 months, as has been predicted, then in 6 years that’s four doublings, or a 16x increase in cores, which puts CPUs in the neighborhood of 100 cores. Some of those CPU cores could well be basic graphics cores to handle the kind of graphics that Word and Excel need, but the GPU market today is not driven by these requirements. There isn’t any reason to suppose that GPU core counts won’t grow at least that fast, which will put them in the neighborhood of 4,000 cores. The users driving the commodity graphics market are still going to want more than a CPU would likely be able to handle. Of course, many people have gone broke betting that markets will not continue to evolve, and I’m not saying that. What I am saying is that right now it doesn’t appear likely that the CPU will subsume the GPU in the next 6 years. Are they likely to be much more closely integrated? You bet. But the competition for area on a CPU die between computation and memory requirements doesn’t leave a lot of room for the kinds of huge silicon areas that GPUs need.
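For the record, the arithmetic behind those neighborhoods is simple enough to sketch. The 2009 starting core counts below are rough illustrative assumptions, not measurements:

```python
# Back-of-the-envelope projection: cores doubling every 18 months
# over a 6-year horizon, from rough 2009-era starting points.
years = 6
doublings = years * 12 / 18          # 4 doublings in 6 years
scale = 2 ** doublings               # 16x growth

cpu_cores_2009 = 6                   # e.g., a high-end x86 server part
gpu_cores_2009 = 240                 # e.g., a high-end NVIDIA GPU

print(f"scale factor: {scale:.0f}x")
print(f"projected CPU cores: ~{cpu_cores_2009 * scale:.0f}")   # ~96, call it 100
print(f"projected GPU cores: ~{gpu_cores_2009 * scale:.0f}")   # ~3840, call it 4,000
```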

Comments

  1. John Leidel says

    As an addition to John West’s comments on the future adoption of GPUs, we are ignoring a critical piece of the proverbial pie. How does one feed a processor that is 570X faster than today’s processors? Current bus technologies are already limiting the amount of data one can pump to and from a GPU. If you examine the rate at which bus technologies evolve [http://en.wikipedia.org/wiki/Computer_bus], it’s far slower than that of core processor technologies.
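    To put a rough number on that imbalance, here is a back-of-the-envelope sketch. The PCIe bandwidth and GPU peak figures below are approximate 2009-era assumptions, not measurements:

    ```python
    # Rough bytes-per-flop the host bus can feed a GPU (illustrative numbers).
    pcie2_x16_gbytes_s = 8.0      # ~8 GB/s per direction, PCIe 2.0 x16
    gpu_peak_gflops = 1000.0      # ~1 TFLOP/s peak single precision

    # Both quantities are in units of 1e9 per second, so they divide directly.
    bytes_per_flop = pcie2_x16_gbytes_s / gpu_peak_gflops
    print(f"~{bytes_per_flop:.3f} bytes over the bus per flop")      # ~0.008

    # If GPU compute really grows 570x while the bus improves only modestly,
    # the ratio gets roughly 570x worse:
    print(f"~{bytes_per_flop / 570:.2e} bytes/flop at 570x compute")
    ```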

    Personally, I believe GPUs are well suited to certain applications. However, I also believe they are simply a stepping stone to the next generation of computational platforms. I believe we’re beginning to witness the initial convergence of traditional scalar and massively multi-threaded [vector] core processor techniques. Alongside this, I also believe we’ll bear witness to a similar shift in parallelization methodologies.