CTR's 2009 top 5 trends in HPC


Computer Technology Review published its list of the top 5 trends it expects in HPC throughout 2009. Here is the (abbreviated) list:

  1. HPC is becoming more mainstream.
  2. Green IT initiatives are getting real.
  3. Cloud computing/software as a service/infrastructure as a service are becoming concrete.
  4. Traditional data centers are losing favor.
  5. A new era in storage is dawning.

A reasonable enough list, but hey, everyone is a critic, and I would have added GPUs. It looks to me like the general accelerator rush is coming to an end with the demise of ClearSpeed and the reminder that FPGAs are still really hard to get performance out of. But GPUs look like they are going to run in 2009 — although how far they run beyond 2009 depends upon how well they fare when (or if) manycore processors finally come to market.

The storage part of that article — about content addressable storage — is worth a quick read.


  1. […] West at InsideHPC.com links to an article I read last week and didn’t comment on. In this article David Driggers, CTO at Verari, points […]


  1. Hi John:

    I disagree with the comment that “the general accelerator rush is coming to an end”. On the contrary, speaking with customers, it seems to be picking up. ClearSpeed had issues both as a company and as a product, but it was not the only accelerator provider.

    The argument for accelerators is compelling for smaller users, who can bring much more computing power to bear on their applications. It is also compelling for software ISVs looking to help their customers cut hardware costs, leaving more budget for the ISVs' software.

    Accelerators offer some combination of many more processor cycles per wall-clock tick and more efficient use of each cycle. As we demonstrated recently with GPU-HMMer, a single machine with 3 GPUs can outperform a more power-hungry, harder-to-maintain cluster on the same code. That is, there is a financial argument, an ease-of-use argument, and, believe it or not, a green argument for using accelerators. As you note in the subsequent article, the memory wall is a problem, and curiously enough GPUs suffer from their own version of it, though it sits roughly 10x further out than the memory wall on the host (what we call the computing substrate).

    ClearSpeed failed because its business model required ~$5k per accelerator for ~10x wall-clock improvement on ordinary applications, your code needed to be ported, and ClearSpeed accelerators were not ubiquitous. Programmable GPUs, by contrast, are in roughly 1E+7 devices, cost $150-$1,800 per unit, and deliver 5-30x per application (not per kernel). Couple that ubiquity with the low cost of the platform (I can and do develop CUDA code on my laptop) and the zero cost of the tools, and you have something of real interest to people who need ever more computing capability on an ever-decreasing budget.

    We might have to agree to disagree, but accelerators appear to have a very bright, and long future ahead of them in HPC.
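    The economics in that comment reduce to a quick back-of-the-envelope calculation. A minimal sketch, using only the illustrative figures quoted above (not actual vendor pricing data):

    ```python
    # Dollars spent per 1x of application-level speedup, using the
    # rough numbers from the comment (illustrative figures only).

    def cost_per_speedup(unit_cost, speedup):
        """Return dollars paid per 1x of wall-clock speedup."""
        return unit_cost / speedup

    # ClearSpeed-style board: ~$5,000 for ~10x on ported code
    clearspeed = cost_per_speedup(5000, 10)   # $500 per 1x

    # Programmable GPUs: $150-$1,800 per unit for 5-30x per application
    gpu_low  = cost_per_speedup(150, 5)       # $30 per 1x (cheap card, modest speedup)
    gpu_high = cost_per_speedup(1800, 30)     # $60 per 1x (high-end card, best case)

    print(clearspeed, gpu_low, gpu_high)     # 500.0 30.0 60.0
    ```

    Even at the extremes of the quoted ranges, the GPU route comes in roughly an order of magnitude cheaper per unit of speedup, which is the crux of the argument above.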


  2. Joe – I agree entirely about GPUs (for now at least). My “general accelerator” comment was meant to cover the broader class that includes GPUs, ClearSpeed, FPGAs, and whatnot. I was trying to convey with my next sentence, “But GPUs look like they are going to run in 2009…”, that I think GPUs are the specific exception to the general rule… but I think I failed to be clear!

  3. Whoops, my bad. I had misunderstood what you meant. I think we are in close agreement.

    Must …. not …. post …. before …. coffee ….