Linux Magazine has an article written by Dan Tuchler detailing why he thinks 10-gigabit Ethernet should be more widely considered for vanilla HPC cluster installations. Considering that the vast majority of cluster installations fall outside the realm of the Top500 list, many of us tend to forget that the average HPC user doesn’t have terabits of interconnect bandwidth. They’re simply using gigabit Ethernet. Tuchler argues that this high comfort level with Ethernet technologies, coupled with the falling cost of 10GbE, makes the technology ripe for use as an interconnect platform.
As a widely-used standard, Ethernet is a known environment for IT executives, network administrators, server vendors, and managed service providers around the world. They have the tools to manage it and the knowledge to maintain it. Broad vendor support is also a plus – almost all vendors support Ethernet.
I somewhat agree with Tuchler’s point of view. Five years ago 10GbE prices were so far out in the stratosphere that you would rarely have the funds to purchase a switch. The prices *are* finally coming down to reasonable levels. However, so are the prices of other common cluster interconnects such as Myrinet and InfiniBand. Tuchler quotes $500 per port for 10GbE, which is very close to the current InfiniBand cost basis. So why go 10GbE when you can buy InfiniBand with native RDMA capabilities and an integrated IP stack? [This is really a question, folks; I'm not being sarcastic.]
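For a bit of context on why the fabric choice is largely invisible to the application: here is a minimal MPI ping-pong sketch (a hypothetical example of mine, not something from Dan's article). The same source runs unchanged over gigabit Ethernet, 10GbE, or InfiniBand, because the MPI library (for example, Open MPI's TCP versus verbs transports) selects the interconnect underneath; what changes between fabrics is the measured latency and bandwidth, which is exactly where the cost-per-port debate bites.

```c
/*
 * Minimal MPI ping-pong sketch (illustrative only, not from the article).
 * The application code is identical whether the cluster runs gigabit
 * Ethernet, 10GbE, or InfiniBand -- the MPI implementation picks the
 * transport. Only the reported round-trip time differs.
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[64];
    const int iters = 1000;
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* rank 0 sends a small message and waits for the echo */
            strcpy(buf, "ping");
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes the message back */
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("avg round-trip: %.2f us\n", (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}
```

Build it with `mpicc` and launch it across two nodes with `mpirun -np 2`; run the same binary on a GigE, 10GbE, and InfiniBand cluster and the latency numbers tell the rest of the story.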
Feel free to leave your comments on this one. I’m interested to hear how the audience feels about this debate. For more info, read Dan’s article here.