Voltaire announced today that it has been working with the Tokyo Institute of Technology to deliver a 40 Gb/s InfiniBand solution for the upcoming TSUBAME 2.0 machine. The 2.4 petaflop machine will have more than 1,400 compute nodes incorporating Voltaire QDR HCAs and 12 Grid Director 4700 switches.
“TSUBAME 2.0 will continue to push science forward by providing world-class supercomputer facilities that enable research and development to be completed and utilized more quickly than ever before,” said Professor Satoshi Matsuoka of the Global Scientific Information and Computing Center (GSIC), Tokyo Institute of Technology. “We designed the system with the latest Intel Westmere-EP and Nehalem-EX CPUs coupled with more than 4,200 NVIDIA Tesla 20-series GPUs to provide extreme processing capabilities on each node. To capitalize on this new level of compute power, we implemented a dual-rail, non-blocking fabric that can support throughput up to 80 Gb/s per node, employing two Voltaire 40 Gb/s InfiniBand connections on each node.”
“The Tokyo Institute of Technology’s requirements in its design of TSUBAME 2.0 call for extreme levels of high performance, high scalability, and low latency in the fabric,” said Asaf Somekh, vice president of marketing, Voltaire. “Voltaire’s 40 Gb/s InfiniBand solutions deliver top performance, and as a result are used within the world’s leading supercomputers. We are pleased to be working with our long-time OEM partner NEC on this project.”
Tech details from their release include: The new TSUBAME 2.0 supercomputer system, which will have more than 1,400 compute nodes, will incorporate Voltaire’s QDR InfiniBand fabric in a fully non-blocking configuration, with 12 Grid Director 4700 40 Gb/s InfiniBand switches, 179 Grid Director 4036 edge switches and 6 Grid Director 4036E switches for high performance bridging to 10 GbE storage.
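The per-node and fabric-wide numbers above can be sanity-checked with some back-of-envelope arithmetic. The sketch below uses only the figures quoted in the release (1,400+ nodes, two 40 Gb/s rails per node); the aggregate injection-bandwidth figure is our own illustrative calculation, not a number from the release, and uses 1,400 as a lower bound on the node count.

```python
# Back-of-envelope fabric figures derived from the release (illustrative only).
NODES = 1400            # "more than 1,400 compute nodes" -- treated as a lower bound
RAILS_PER_NODE = 2      # dual-rail: two InfiniBand connections per node
LINK_RATE_GBPS = 40     # QDR InfiniBand, 40 Gb/s per link

per_node_gbps = RAILS_PER_NODE * LINK_RATE_GBPS            # 80 Gb/s per node, as quoted
total_hca_ports = NODES * RAILS_PER_NODE                   # 2,800 HCA ports fabric-wide
aggregate_tbps = total_hca_ports * LINK_RATE_GBPS / 1000   # ~112 Tb/s aggregate injection

print(per_node_gbps, total_hca_ports, aggregate_tbps)
```

This lines up with the quoted "up to 80 Gb/s per node" and gives a feel for why a non-blocking configuration needs the core/edge switch counts listed above.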
For more info, read their full release here.