Last week HPCwire ran this press release from the HyperTransport Consortium, issued during their annual pow wow. Pretty standard stuff, except for the way this one reads:
Believing that the economic downturn makes computing technology key to reducing cost and increasing operational efficiency, the HyperTransport Technology Consortium today stated that it sees ongoing demand for optimized high-performance computing (HPC) infrastructure capable of supporting job allocation, data handling and peak power flexibility. Highlighting hardware and software virtualization and consolidation as vital enablers of cloud computing, Consortium members discussed the applications and technologies that will be central to the high-performance computing market in the coming years at the International HyperTransport Symposium and Workshop 2009 held last week.
These two sentences capture the spirit of the entire release, which, confusingly and incorrectly, conflates everything that anyone is doing with multiple servers crowded into a room with HPC. That's pretty much my only point. It may well be that we can make use of resources deployed in a cloud configuration for HPC (I'm hopeful this will be the case), and vice versa. But even so, for right now at least, the two workloads may run on similar hardware, but they have different software stacks and different basic characteristics. They shouldn't be blended together…yet.