Search Results for: infiniband

Mellanox Rocks the TOP500 with Ethernet and InfiniBand

Today Mellanox announced that the company’s InfiniBand solutions accelerate six of the top ten HPC and AI supercomputers on the June TOP500 list. The six systems Mellanox accelerates include the top three and four of the top five: the fastest supercomputer in the world at Oak Ridge National Laboratory, #2 at Lawrence Livermore National Laboratory, #3 at the Wuxi Supercomputing Center in China, #5 at the Texas Advanced Computing Center, #8 at Japan’s Advanced Industrial Science and Technology, and #10 at Lawrence Livermore National Laboratory. “HDR 200G InfiniBand, the fastest and most advanced interconnect technology, makes its debut on the list, accelerating four supercomputers worldwide, including the fifth top-ranked supercomputer in the world located at the Texas Advanced Computing Center, which also represents the fastest supercomputer built in 2019.”

InfiniBand: To HDR and Beyond

Ariel Almog from Mellanox gave this talk at the OpenFabrics Workshop in Austin. “Recently, deployment of 50 Gbps per lane (HDR) speed started, and 100 Gbps per lane (NDR), a future technology, is around the corner. The high bandwidth might cause the NIC PCIe interface to become a bottleneck, as PCIe Gen3 can handle up to a single 100 Gbps interface over 16 lanes and PCIe Gen4 can handle up to a single 200 Gbps interface over 16 lanes. In addition, since the host might have dual CPU sockets, Socket Direct technology provides direct PCIe access to both CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus and allowing better utilization of PCIe, thus optimizing overall system performance.”
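
To make the bottleneck concrete, here is a rough back-of-the-envelope sketch (not from the talk) comparing the usable bandwidth of an x16 PCIe slot against InfiniBand port speeds; only the 128b/130b line encoding is modeled, so real achievable figures will be somewhat lower.

# Approximate usable PCIe x16 bandwidth per generation vs. InfiniBand port rates.
def pcie_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Rough usable bandwidth of a PCIe link in Gbps (line encoding only, no protocol overhead)."""
    specs = {3: (8.0, 128 / 130), 4: (16.0, 128 / 130), 5: (32.0, 128 / 130)}  # GT/s per lane, encoding
    gts, encoding = specs[gen]
    return gts * encoding * lanes

ports = {"EDR (4x 25G)": 100, "HDR (4x 50G)": 200, "NDR (4x 100G)": 400}  # port rates in Gbps

for gen in (3, 4, 5):
    bw = pcie_bandwidth_gbps(gen)
    fits = [name for name, rate in ports.items() if rate <= bw]
    print(f"PCIe Gen{gen} x16 ~ {bw:.0f} Gbps -> can feed: {', '.join(fits) or 'none'}")

Run as-is, the sketch shows PCIe Gen3 x16 topping out around 126 Gbps (enough for one EDR port but not HDR), while Gen4 x16 at roughly 252 Gbps can carry a single HDR 200 Gbps port, matching the limits described in the talk.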

Mellanox HDR 200G InfiniBand Speeds Machine Learning with NVIDIA

Today Mellanox announced that its HDR 200G InfiniBand with the “Scalable Hierarchical Aggregation and Reduction Protocol” (SHARP) technology has set new performance records, doubling deep learning operations performance. The combination of Mellanox In-Network Computing SHARP with NVIDIA V100 Tensor Core GPU technology and the NVIDIA Collective Communications Library (NCCL) delivers leading efficiency and scalability to deep learning and artificial intelligence applications.
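
For context, the operation SHARP accelerates is the gradient allreduce at the heart of data-parallel training. The sketch below is a minimal, hypothetical example (not from the announcement), assuming PyTorch with the NCCL backend and one GPU per rank; enabling SHARP itself is a fabric/NCCL configuration matter (for example via the NCCL SHARP plugin shipped with Mellanox HPC-X) and is not shown here.

import os
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    """Allreduce every gradient across ranks; with SHARP, the summation happens in-network."""
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size

if __name__ == "__main__":
    # Typically launched with torchrun, which sets RANK, WORLD_SIZE and LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    model = torch.nn.Linear(1024, 1024).cuda()
    loss = model(torch.randn(64, 1024, device="cuda")).sum()
    loss.backward()
    average_gradients(model)  # the allreduce-heavy step SHARP offloads to the switches
    dist.destroy_process_group()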

Video: Why InfiniBand is the Way Forward for AI and Exascale

In this video, Gilad Shainer from the InfiniBand Trade Association describes how InfiniBand offers the optimal interconnect technology for AI, HPC, and Exascale. “Through AI, you need the biggest pipes in order to move those giant amounts of data in order to create those AI software algorithms. That’s one thing. Latency is important because you need to drive things faster. RDMA is one of the key technologies that increases the efficiency of moving data, reducing CPU overhead. And by the way, all of the AI frameworks that exist out there now support RDMA as a default element within the framework itself.”
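
As a small illustration of the hardware side of that claim, the sketch below (an assumption about a typical Linux HPC node, not part of the interview) lists the RDMA-capable devices and port rates exposed through sysfs, which is the hardware an AI framework’s RDMA transport would use.

from pathlib import Path

RDMA_SYSFS = Path("/sys/class/infiniband")  # standard sysfs location for RDMA devices on Linux

if not RDMA_SYSFS.exists():
    print("No RDMA devices visible (drivers not loaded, or no InfiniBand/RoCE hardware)")
else:
    for dev in sorted(RDMA_SYSFS.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            rate = (port / "rate").read_text().strip()    # e.g. "100 Gb/sec (4X EDR)"
            state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
            print(f"{dev.name} port {port.name}: {rate}, {state}")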

Now Shipping: Mellanox HDR 200G InfiniBand Solutions for Accelerating HPC & AI

Today Mellanox announced that its 200 Gigabit HDR InfiniBand solutions are now shipping worldwide to deliver leading efficiency and scalability to HPC, AI, cloud, storage and other data-intensive applications. “The worldwide strategic race to Exascale supercomputing, the exponential growth in data we collect and need to analyze, and the new performance levels needed to support new scientific investigations and innovative product designs all require the fastest and most advanced HDR InfiniBand interconnect technology. HDR InfiniBand solutions enable breakthrough performance levels and deliver the highest return on investment, enabling the next generation of the world’s leading supercomputers, hyperscale, Artificial Intelligence, cloud and enterprise datacenters.”

200 Gigabit HDR InfiniBand to Power New Atos Supercomputers at CSC in Finland

Last week in Finland, Mellanox announced that its 200 Gigabit HDR InfiniBand solutions were selected to accelerate a multi-phase supercomputer system at CSC – the Finnish IT Center for Science. The new supercomputers, to be deployed in 2019 and 2020 by Atos, will serve Finnish researchers in universities and research institutes, advancing work in climate, renewable energy, astrophysics, nanomaterials and bioscience, among a wide range of exploration activities. The Finnish Meteorological Institute (FMI) will have its own separate partition for diverse simulation tasks ranging from ocean fluxes to atmospheric modeling and space physics.

Mellanox Powers New Hawk Supercomputer at HLRS with 200 Gigabit HDR InfiniBand

Today Mellanox announced that its 200 Gigabit HDR InfiniBand solutions were selected to accelerate a world-leading supercomputer at HLRS in Germany. The 5,000-node supercomputer, named “Hawk,” will be built in 2019 and provide 24 petaflops of compute performance. By utilizing InfiniBand’s fast data throughput and smart In-Network Computing acceleration engines, HLRS users will be able to achieve the highest HPC and AI application performance, scalability and efficiency.

Why InfiniBand rules the roost in the TOP500

In this special guest feature, Bill Lee from the InfiniBand Trade Association writes that the new TOP500 list has a lot to say about how interconnects matter for the world’s most powerful supercomputers. “Once again, the List highlights that InfiniBand is the top choice for the most powerful and advanced supercomputers in the world, including the reigning #1 system – Oak Ridge National Laboratory’s (ORNL) Summit. The TOP500 List results report not only that InfiniBand accelerates the top three supercomputers in the world, but also that it is the most used high-speed interconnect among TOP500 systems.”

Video: Mellanox HDR InfiniBand Speeds HPC and AI Applications with SHARP Technologies

In this video, Gilad Shainer from Mellanox describes how the company’s newly available HDR 200 Gigabit/sec InfiniBand solutions can speed up HPC and AI applications. “We are proud to see the first supercomputer in the world based on Mellanox HDR InfiniBand ConnectX-6 adapters and Quantum switches,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “The smart In-Network Computing acceleration engines that InfiniBand enables will deliver the highest performance, efficiency and scalability for the University of Michigan users, for both HPC and AI applications.”

Bitfusion Enables InfiniBand-Attached GPUs on Any VM

“With Bitfusion along with Mellanox and VMware, IT can now offer the ability to mix bare-metal and virtual machine environments, such that GPUs in any configuration can be attached to any virtual machine, enabling easy access to GPUs for everyone in the organization,” said Subbu Rama, co-founder and chief product officer, Bitfusion. “IT can now pool together resources and offer an elastic GPU as a service to their organizations.”