Traditionally, computing devices are connected together on a bus, and the most popular bus standard will soon be PCI Express. A bus-based architecture tends to have poor latency, though for most applications this is acceptable. Sometimes, however, when an application has many processes exchanging relatively little data, the cost of numerous small messages becomes a major factor in performance, and latency turns critical. For these applications, PCI Express may be a liability.
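To see why small messages make latency dominate, consider the simple latency-bandwidth (alpha-beta) cost model, where sending an n-byte message costs roughly alpha + n/beta. The sketch below uses illustrative numbers (5 microseconds of per-message latency, 1 GB/s of bandwidth), not measurements of any particular interconnect:

```python
# Alpha-beta cost model: time to send one n-byte message is
# alpha (fixed per-message latency) + n / beta (serialization at bandwidth beta).
def transfer_time(n_bytes, alpha_s, beta_bytes_per_s):
    """Estimated time in seconds to send a single message of n_bytes."""
    return alpha_s + n_bytes / beta_bytes_per_s

# Illustrative, assumed parameters -- not real hardware figures.
ALPHA = 5e-6   # 5 microseconds per message
BETA = 1e9     # 1 GB/s

# Moving 4 MB as one bulk message vs. as 1024 messages of 4 KB each:
bulk = transfer_time(4 * 2**20, ALPHA, BETA)
small = 1024 * transfer_time(4 * 2**10, ALPHA, BETA)

print(f"one 4 MB message:      {bulk * 1e3:.3f} ms")
print(f"1024 x 4 KB messages:  {small * 1e3:.3f} ms")
```

The same 4 MB takes more than twice as long when split into small messages, and almost all of the extra time is accumulated per-message latency (1024 × 5 µs). Cutting bandwidth barely helps such a workload; cutting latency does.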
Indeed, one of the original goals of InfiniBand was to replace PCI with a switched fabric. Widespread adoption for this purpose is unlikely to happen, and IBA has instead become a cluster interconnect.
Another standard is HyperTransport, in which devices connect directly to the processor, as in processor-to-processor links in SMPs. The HyperTransport connector (called HTX) can be used for expansion slots; for example, PathScale (soon to be QLogic) offers an InfiniBand card that plugs into an HTX slot.
HyperTransport and PCI Express can coexist. Motherboard manufacturers usually provide both on the same system, allowing users to choose how to build their systems according to their needs.
So, is HTX the next big thing in cluster interconnects? That remains to be seen. For one thing, it’s only available with AMD chips at the moment (though to be fair, AMD does have a sizable chunk of the HPC market). Another issue is that PathScale has the only major peripheral device for HTX, and it will be releasing a PCI Express version in the coming months. It is true that Cray uses HyperTransport in its Opteron-based systems, but those aren’t commodity hardware. Finally, Sun, a founding member of the HyperTransport Consortium, has recently declined to add HTX to its product line. Not a vote of confidence, for sure.
One other item of note: Intel has announced plans to create a “Common System Interface” (CSI) to compete with HyperTransport, though CSI will take a few years to materialize.