
NVIDIA's analyst day

insideHPC’s pal Andy sent along an email pointer to this article at c|net, where author Peter Glaskowsky outlines his experiences at NVIDIA’s latest show and tell for people who think and write about tech, Analyst Day.

Glaskowsky starts off with a good sign: CEO Huang acknowledging a strategic error and then outlining a plan to fix it.

Nvidia has had a rough couple of quarters in the market, which CEO Jen-Hsun Huang blamed in part on a bad strategic call in early 2008: to place orders for large quantities of new chips to be delivered later in the year. When the recession hit, these orders turned into about six months of inventory, much of which simply couldn’t be sold at the usual markup.
In response, Nvidia CFO David White outlined measures the company plans to take to increase revenue, sell a more valuable mix of products, reduce the cost of goods sold, and cut back on Nvidia’s operating expenses.

Glaskowsky then talks a little about NVIDIA’s near-term move to a 40nm process with chip fabber TSMC before moving on to his concept of a digital divide: an era in which computers with GPUs hold a crushing advantage over those without, as software vendors increasingly retool their codes to take advantage of the added processing power.

It seems to me that such dramatic performance differences create a new (and less socially significant) kind of “digital divide.” As more applications learn to take advantage of GPU co-processing, the practical advantages of GPU-equipped systems will eventually become overwhelming.

This is a provocative statement. It makes sense from a cost-of-hardware perspective — a low-end add-on card can cost under $50, and it probably costs OEMs much less to get just the chip to solder onto the motherboard — so there isn’t a production barrier to including a GPU on just about everything bigger than a Palm Pre. Given the state of programming for GPUs today, however, I don’t see ISVs flocking en masse toward adoption. We are starting to see some significant higher-level tools and abstractions, though, and if those gain a solid foothold they could facilitate broader adoption. All of this could impact HPC if those creating the new tools and abstractions go the extra mile and allow us to focus just on the parallelism, without having to retool or recompile depending upon whether the target is CPUs or GPUs. I think this is the kind of barrier crashing that needs to happen in order for GPU systems to make significant inroads into traditional scientific computing.
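To make the "focus just on parallelism" idea concrete, here is a minimal Python sketch of the kind of backend-agnostic abstraction described above: the user writes a kernel once, and a dispatch layer decides where it runs. The `parallel_map` function and the backend names are invented for illustration — real libraries expose different APIs, and the "gpu" path here is only a serial placeholder standing in for a device kernel launch.

```python
# Hypothetical sketch of a backend-agnostic parallel abstraction.
# The parallel_map API and backend names are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def parallel_map(kernel, data, backend="cpu"):
    """Apply `kernel` to each element of `data` on the chosen backend."""
    if backend == "cpu":
        # CPU path: a thread pool stands in for multicore execution.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(kernel, data))
    elif backend == "gpu":
        # GPU path: a real implementation would launch a device kernel;
        # a plain serial loop serves as a placeholder here.
        return [kernel(x) for x in data]
    raise ValueError(f"unknown backend: {backend}")

def saxpy(x, a=2.0, y=1.0):
    # The user's kernel contains no device-specific code at all.
    return a * x + y

# The same kernel, unchanged, runs on either backend.
cpu_result = parallel_map(saxpy, [0.0, 1.0, 2.0], backend="cpu")
gpu_result = parallel_map(saxpy, [0.0, 1.0, 2.0], backend="gpu")
```

The point of the design is that switching targets is a one-argument change rather than a rewrite or recompile — which is exactly the property that would lower the barrier for ISVs and scientific codes alike.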
