“The CMS detector at the Large Hadron Collider at CERN underwent a replacement of its data acquisition network so that it can process the increased data rates expected in the coming years. We will present the architecture of the system and discuss the design of its layers, which are based on InfiniBand as well as 10 and 40 Gbit/s Ethernet.”
“Adaptive Routing has been added to the static routing capability available in previous switch families. InfiniBand supports moving traffic via multiple parallel paths. Adaptive routing dynamically and automatically re-routes traffic to alleviate congested ports. In networks where traffic patterns are more predictable, static routing has been shown to produce superior results. The InfiniScale IV architecture provides the best of both static and adaptive routing.”
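The contrast between the two routing modes can be sketched in a few lines. The simulation below is a hypothetical illustration, not Mellanox's actual algorithm: static routing hashes each flow to a fixed port among the parallel paths, while adaptive routing picks whichever port is currently least congested. The port count, flow set, and queue model are all invented for the example.

```python
PORTS = 4  # parallel output ports toward the same destination (assumed)

def static_port(src, dst):
    # Static routing: a fixed hash of the flow always selects the same
    # port, regardless of current congestion.
    return hash((src, dst)) % PORTS

def adaptive_port(queue_depths):
    # Adaptive routing: select the least-congested port at this moment.
    return min(range(PORTS), key=lambda p: queue_depths[p])

def simulate(choose, flows, pkts_per_flow=100):
    # Tally how many packets each output port queues under a given policy.
    queues = [0] * PORTS
    for src, dst in flows:
        for _ in range(pkts_per_flow):
            queues[choose(src, dst, queues)] += 1
    return queues

flows = [(s, 0) for s in range(8)]  # 8 sources all sending to one destination
static = simulate(lambda s, d, q: static_port(s, d), flows)
adaptive = simulate(lambda s, d, q: adaptive_port(q), flows)
print("static queue depths:  ", static)    # may be uneven if hashes collide
print("adaptive queue depths:", adaptive)  # load spread evenly across ports
```

With a predictable, well-balanced traffic pattern the static hash can already place flows evenly, which is why static routing can win in such networks; the adaptive policy earns its keep when flows collide on the same path.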
CORAL (Collaboration of Oak Ridge, Argonne and Lawrence Livermore National Labs) is a project launched in 2013 to develop the technology needed to meet the Department of Energy’s 2017-2018 leadership computing needs with supercomputers. The collaboration between Mellanox, IBM and NVIDIA was selected by the CORAL project team after a comprehensive evaluation of future technologies from a variety of vendors. The development of these supercomputers is well underway, with installation expected in 2017.
“Facebook had the forethought to create the Open Compute Foundation and share the IP from designing a highly efficient computing infrastructure at extremely low cost. We are now building on that collaborative development model to bring expanded flexibility with regard to form factors, processors and configurations for a broad range of customer requirements.”
In this slidecast, Bill Lee and Rupert Dance from the InfiniBand Trade Association describe the new IBTA Volume 1 Specification Release. “The new release defines new capabilities that will enable computer systems to keep up with the requirements for increased scalability and bandwidth, along with high computing efficiency and high availability for both high performance computing and commercial enterprise data centers.”
In this video from SC14, Eyal Waldman from Mellanox announces that the company can now deliver end-to-end 100 Gigabit/sec InfiniBand. “The demand for more computing power, efficiency and scalability is constantly accelerating in the HPC, enterprise, cloud computing and Web 2.0 markets. To address these demands Mellanox provides complete end-to-end solutions (silicon, adapter cards, switch systems, cables and software) supporting InfiniBand and Ethernet networking technologies.”