InfiniBand Charts Course to Exascale


In this story that originally appeared in The Exascale Report, Lloyd Dickman takes a closer look at how InfiniBand is advancing on the road to Exascale.

The next order-of-magnitude increase in system performance, from 10 PetaFLOPS to 100 PetaFLOPS, will require further evolution of the InfiniBand standards so that hundreds of thousands of nodes can be addressed. The InfiniBand industry is already discussing what evolved capabilities systems of such scale will need. As with the previous performance step, the required link bandwidths can be achieved with 12x EDR (currently being defined) or perhaps 4x HDR (already identified on the InfiniBand industry roadmap). Systems of such scale may also exploit topologies such as mesh/torus or hypercube, for which there are already large-scale InfiniBand deployments.
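To make the 12x EDR versus 4x HDR comparison concrete, here is a rough back-of-the-envelope sketch in Python. The per-lane signaling rates used below (roughly 25 Gb/s for EDR and 50 Gb/s for HDR) are assumptions drawn from the public InfiniBand roadmap, not figures stated in this article.

```python
# Back-of-the-envelope link bandwidth comparison for the lane-width options
# mentioned above. Per-lane rates are assumptions from the public InfiniBand
# roadmap (EDR ~25 Gb/s/lane, HDR ~50 Gb/s/lane), not from the article.

EDR_PER_LANE_GBPS = 25  # assumed Enhanced Data Rate, Gb/s per lane
HDR_PER_LANE_GBPS = 50  # assumed High Data Rate, Gb/s per lane

def link_bandwidth_gbps(lanes: int, per_lane_gbps: float) -> float:
    """Aggregate raw link bandwidth in Gb/s for a given lane width."""
    return lanes * per_lane_gbps

if __name__ == "__main__":
    print(f"12x EDR: {link_bandwidth_gbps(12, EDR_PER_LANE_GBPS)} Gb/s")  # 300 Gb/s
    print(f" 4x HDR: {link_bandwidth_gbps(4, HDR_PER_LANE_GBPS)} Gb/s")   # 200 Gb/s
```

Under these assumed rates, a wider EDR link (12 lanes) and a narrower HDR link (4 lanes) land in the same few-hundred-Gb/s range, which is why either path can satisfy the next bandwidth step.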

Update: Jeff Squyres has posted a thought-provoking response to this post over at the Cisco HPC blog.
