All NVIDIA, all the time: what's up?


Over the past two days I’ve had a couple of friendly (and some pointed) jabs asking whether we’ve become an NVIDIA subsidiary, and with the flood of news related to that company, you may well be wondering what the heck is going on (particularly if you read us by email rather than on the site).

NVIDIA is holding its GPU Technology Conference this week, and in the past 48 hours it has produced a flood of news of great interest to the HPC and scientific computing community. We are doing our best to whittle it down to just the bits that matter to the largest audience, but the fact is that much of the news is genuinely significant (512 cores and 1 TB of RAM in NVIDIA’s next-gen GPUs? Yeah, you need to know about that).

So rest easy! We are still your friendly, locally-owned HPC news site, and now that NVIDIA has a bunch of its announcements out on the street, I expect things to return to a more normal mix. For those of you put off by the sudden change, my apologies. We aim to report on whoever is making the news, and since yesterday that’s been NVIDIA.

Comments

  1. I agree that this is interesting news, and I’m happy to see it here. My only complaint (and, really, I shouldn’t use that word) is that there’s not more information on ORNL’s planned 20 PF system. And I don’t mean details on the system architecture, but rather on the procurement / research process… the Fermi chips, being equipped with essentially 256 double-precision cores, are obviously quite nice. But did ORNL get access to simulators? Early designs? Did they run any of their applications? Etc.

    See, the trouble with GPUs in the past has been the effort required to use them – and unless you could get by with single-precision FP, that effort was seldom worth it, since it usually meant rewriting things (as opposed to, say, simply recompiling). Now the new chips offer a substantial bump in double-precision FP, so is ORNL planning to invest lots of effort in rewriting applications, or are they expecting the toolset to be substantially better, more or less allowing recompiles to take advantage of the hardware? Or are they targeting the single-precision crowd? I’d love to know more about what happened behind the scenes. I’m going to bug people at SC09, but I’d love to hear more if you can find out!

    (My place of work has yet to come around to the idea of investing even tiny amounts of its considerable funds in GPUs or other accelerators, but if I can say, “Hey, look – ORNL is doing this, that and whatever!”, it helps make the case to shuffle some resources in this direction. And maybe, just maybe, we can climb a little closer to the forefront of HPC technologies.)

  2. Brian – I hope to dig into those details as well. I’ve sent emails… just waiting on a response. Hopefully I’ll have helpful information to share.
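    In the meantime, here’s a minimal sketch of the rewrite-versus-recompile problem you describe. To be clear, this is my own illustration, not anything from ORNL or NVIDIA: a trivial double-precision DAXPY that is a one-line loop on the CPU but, as a CUDA port, has to be restructured into a kernel with explicit device allocation and transfers (it assumes a double-precision-capable part, i.e. compute capability 1.3 or later).

       // CPU version: moving to a new CPU is just a recompile.
       #include <cstdio>
       #include <cuda_runtime.h>

       void daxpy_cpu(int n, double a, const double *x, double *y) {
           for (int i = 0; i < n; ++i)
               y[i] = a * x[i] + y[i];
       }

       // GPU version: the same math, but rewritten as a CUDA kernel.
       __global__ void daxpy_gpu(int n, double a, const double *x, double *y) {
           int i = blockIdx.x * blockDim.x + threadIdx.x;
           if (i < n)
               y[i] = a * x[i] + y[i];
       }

       int main() {
           const int n = 1 << 20;
           const size_t bytes = n * sizeof(double);

           // Host buffers.
           double *x = (double *)malloc(bytes);
           double *y = (double *)malloc(bytes);
           for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

           // Explicit device allocation and transfers – the bookkeeping
           // that "simply recompiling" never required.
           double *dx, *dy;
           cudaMalloc(&dx, bytes);
           cudaMalloc(&dy, bytes);
           cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
           cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

           // Launch one thread per element, 256 threads per block.
           daxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0, dx, dy);
           cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);

           printf("y[0] = %f\n", y[0]);  // expect 4.0

           cudaFree(dx); cudaFree(dy);
           free(x); free(y);
           return 0;
       }

    Even in this toy case, the port touches memory management, launch configuration, and data movement; scale that to a real application and you can see why the tooling question you raise matters so much.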