

Full Roundup: SC19 Booth Tour Videos from insideHPC

Now that SC19 is behind us, it’s time to gather our booth tour videos in one place. Throughout the course of the show, insideHPC talked to dozens of HPC innovators showcasing the very latest in hardware, software, and cooling technologies.

Mellanox Rocks the TOP500 with HDR InfiniBand at SC19

In this video, Gilad Shainer from Mellanox describes the company’s dominating results on the TOP500 list of the world’s fastest supercomputers. “At SC19, Mellanox announced that 200 gigabit per second HDR InfiniBand accelerates 31% of the new 2019 InfiniBand systems on November’s TOP500 supercomputing list, demonstrating market demand for faster data speeds and smart interconnect technologies. Moreover, HDR InfiniBand connects the fastest TOP500 supercomputer built in 2019.”

Video: NVIDIA Magnum IO Moves Big Data Faster than Previously Possible

Today NVIDIA introduced NVIDIA Magnum IO, a suite of software to help data scientists and AI and high performance computing researchers process massive amounts of data in minutes, rather than hours. “Optimized to eliminate storage and input/output bottlenecks, Magnum IO delivers up to 20x faster data processing for multi-server, multi-GPU computing nodes when working with massive datasets to carry out complex financial analysis, climate modeling and other HPC workloads.”

Mellanox LongReach Appliance Extends InfiniBand Connectivity up to 40 Kilometers

Today Mellanox introduced the Mellanox Quantum LongReach series of long-distance InfiniBand switches. Mellanox Quantum LongReach systems provide the ability to seamlessly connect remote InfiniBand data centers together, or to provide high-speed and full RDMA (remote direct memory access) connectivity between remote compute and storage infrastructures. Based on the 200 gigabit HDR Mellanox Quantum InfiniBand switch, the LongReach solution provides up to two long-reach InfiniBand ports and eight local InfiniBand ports. The long reach ports can deliver up to 100Gb/s data throughput for distances of 10 and 40 kilometers.

Mellanox Announces HDR InfiniBand-to-Ethernet Gateway Appliance for High Performance Data Centers

Today Mellanox introduced Mellanox Skyway, a 200 gigabit HDR InfiniBand to Ethernet gateway appliance. Mellanox Skyway enables a scalable and efficient way to connect the high-performance, low-latency InfiniBand data center to external Ethernet infrastructure. Mellanox Skyway is the next generation of the existing 56 gigabit FDR InfiniBand to 40 gigabit Ethernet gateway system, deployed in multiple data centers around the world.

GPU-Powered Turbocharger coming to JUWELS Supercomputer at Jülich

The Jülich Supercomputing Centre is adding a high-powered booster module to its JUWELS supercomputer. Designed in cooperation with Atos, ParTec, Mellanox, and NVIDIA, the booster module is equipped with several thousand GPUs designed for extreme computing power and artificial intelligence tasks. “With the launch of the booster in 2020, the computing power of JUWELS will be increased from currently 12 to over 70 petaflops.”

Dell EMC to Deploy World’s Largest Industrial Supercomputer at Eni

Today Eni announced plans to deploy the world’s largest industrial supercomputer at its Green Data Center in Italy. Called “HPC5,” the new system from Dell EMC will triple the computing power of their existing HPC4 system. The combined machines will have a total peak performance of 70 petaflops. “HPC5 will be made up of 1,820 Dell EMC PowerEdge C4140 servers, each with two Intel Gold 6252 24-core processors and four NVIDIA V100 GPU accelerators. The servers will be connected through an InfiniBand Mellanox HDR ultra-high-performance network.”
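As a back-of-the-envelope sanity check on those numbers (the ~7 double-precision teraflops per V100 is our assumption, not a figure from the announcement), the GPU count alone accounts for most of HPC5’s share of the quoted 70-petaflop total:

```python
# Hedged sketch: sanity-check HPC5's peak-performance claim from its parts list.
# The 7 TFLOPS FP64 per V100 figure is an assumed nominal value, not from the article.
servers = 1820
gpus_per_server = 4
v100_fp64_tflops = 7.0  # assumed double-precision peak per GPU

total_gpus = servers * gpus_per_server                    # 7,280 GPUs in total
gpu_peak_pflops = total_gpus * v100_fp64_tflops / 1000.0  # convert TFLOPS -> PFLOPS

print(f"{total_gpus} GPUs -> ~{gpu_peak_pflops:.0f} PFLOPS from the GPUs alone")
```

That works out to roughly 51 petaflops from the accelerators, which together with the existing HPC4 system is consistent with the combined ~70 petaflop figure.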

Video: InfiniBand In-Network Computing Technology and Roadmap

Rich Graham from Mellanox gave this talk at the UK HPC Conference. “In-Network Computing transforms the data center interconnect into a ‘distributed CPU’ and ‘distributed memory,’ making it possible to overcome performance barriers and enable faster, more scalable data analysis. HDR 200G InfiniBand In-Network Computing technology includes several elements: Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), smart Tag Matching and rendezvous protocol, and more. This session discusses the InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap.”
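To give a feel for what SHARP’s hierarchical aggregation means, here is a toy Python sketch of a tree-based reduction. This is only an illustrative model of the general idea (partial results combined level by level rather than all at one root), not Mellanox’s implementation; in SHARP the combining happens inside the switch ASICs themselves.

```python
# Toy sketch of hierarchical aggregation, the idea behind SHARP: instead of
# every node sending its data to a single root, partial sums are combined at
# each level of a tree. This is an illustrative model, not Mellanox's code.

def tree_reduce(values, fanout=2):
    """Reduce a list by summing groups of `fanout` elements at each tree level."""
    level = list(values)
    while len(level) > 1:
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    return level[0]

# Eight "nodes" each contribute one partial result; the tree combines them
# in three levels rather than one root receiving all eight messages.
print(tree_reduce([1, 2, 3, 4, 5, 6, 7, 8]))  # -> 36
```

With a fanout of 2, the number of combining steps grows logarithmically with the node count, which is why offloading this pattern into the fabric scales better than a root-centric gather.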

Mellanox Accelerates NVMe/TCP and RoCE Fabrics to 200Gb/s

Today Mellanox announced acceleration of NVMe/TCP at speeds up to 200Gb/s. The entire portfolio of shipping ConnectX adapters supports NVMe-oF over both TCP and RoCE, and the newly-introduced ConnectX-6 Dx and BlueField-2 products also secure NVMe-oF connections over IPsec and TLS using hardware-accelerated encryption and decryption. These Mellanox solutions empower cloud, telco and enterprise data […]

Report: Mellanox ConnectX Ethernet NICs Outperforming Competition

Today Mellanox announced that laboratory tests by The Tolly Group prove its ConnectX-5 25GbE Ethernet adapter significantly outperforms the Broadcom NetXtreme E series adapter in terms of performance, scalability and efficiency. “Our testing shows that with RoCE, storage traffic, and DPDK, the Mellanox NIC outperformed the Broadcom NIC in throughput and efficient CPU utilization. ConnectX-5 also used ‘Zero-Touch RoCE’ to deliver high throughput even with partial and no congestion control, two scenarios where Broadcom declined to be tested.”