“We chose Mellanox’s end-to-end FDR InfiniBand interconnects to connect the system, specifically using their Connect-IB adapters in a dual-rail network, as well as utilizing NVIDIA GPUDirect RDMA communication acceleration to significantly increase the system’s parallel efficiency.”
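As a rough illustration of what GPUDirect RDMA contributes in such a system, the sketch below passes a GPU-resident buffer straight to MPI, assuming a CUDA-aware MPI library built with GPUDirect RDMA support running over a Connect-IB fabric. The buffer size, ranks, and minimal error handling are simplified assumptions for illustration, not details from the quote above.

```c
/* Minimal sketch: sending a GPU-resident buffer with a CUDA-aware MPI
 * library (assumed to be built with GPUDirect RDMA support).
 * Buffer size and ranks are illustrative assumptions. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 20;                 /* 1M doubles, illustrative */
    double *d_buf;
    cudaMalloc((void **)&d_buf, count * sizeof(double));

    if (rank == 0) {
        /* With GPUDirect RDMA, the InfiniBand adapter reads d_buf
         * directly from GPU memory; no staging copy through host
         * memory is needed. */
        MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With GPUDirect RDMA active, the adapter moves the device buffer directly over the fabric, avoiding the intermediate host copy that would otherwise sit on the critical path of every exchange.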
Today Mellanox announced record applications performance for its Connect-IB FDR 56Gb/s InfiniBand adapters. Benchmarks performed on multiple applications, such as WIEN2k, a quantum mechanical simulation software, and WRF, a Weather Research and Forecasting simulation software, demonstrated up to 200 percent higher performance with forty compute nodes compared to competing QDR 40Gb/s InfiniBand solutions.
Today Mellanox launched its Online Academy and certification platform for data center IT professionals seeking to further enhance and manage InfiniBand- and Ethernet-based networks.
Today Mellanox announced that it is the first certified end-to-end interconnect vendor for OpenStack. Leveraging Mellanox 10/40GbE or FDR 56Gb/s adapters and switches together with the OpenStack Cinder block storage and Neutron plug-ins, cloud vendors can significantly improve storage access performance and run virtual machine traffic with bare-metal performance, while enjoying hardened security and QoS, all delivered in a simple and tightly integrated package.
“MetroX is the perfect cost-effective, low-power, easily managed solution that enables today’s data centers to run over local and distributed RDMA InfiniBand and Ethernet fabrics, with management under a single unified network infrastructure,” said Gilad Shainer, vice president of marketing at Mellanox.
Mellanox is seeking a Director of Channel Marketing in our Job of the Week.
“By opening the VMA source code we give our customers the freedom to implement the acceleration product and more easily tailor it to their specific application needs,” said Tom Thirer, director of product management at Mellanox Technologies. “We encourage our customers to use the free and open VMA source package and to contribute back to the community.”
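For readers unfamiliar with how VMA is applied, the sketch below is an ordinary UDP sender: VMA accelerates applications like this by intercepting standard socket calls through LD_PRELOAD, so no source changes are required. The peer address and port are illustrative assumptions.

```c
/* Minimal sketch: an unmodified UDP sender that VMA can accelerate.
 * VMA intercepts the standard socket calls via LD_PRELOAD, so the
 * application itself needs no changes; address and port below are
 * illustrative assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                        /* illustrative port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr); /* illustrative peer */

    const char msg[] = "hello";
    sendto(fd, msg, sizeof(msg), 0, (struct sockaddr *)&dst, sizeof(dst));

    close(fd);
    return 0;
}
```

A typical invocation would be along the lines of LD_PRELOAD=libvma.so ./udp_sender, which lets VMA carry the socket traffic in user space over the RDMA-capable NIC rather than through the kernel network stack.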
Today Mellanox announced that its FDR InfiniBand-powered RDMA solution enables 2X faster access to file storage.
Over at the Mellanox Blog, Scot Schultz writes that Mellanox has released “Deploying HPC Clusters with Mellanox InfiniBand Interconnect Solutions,” a guide on how to design, build, and test an HPC cluster. High-performance simulations require the most efficient compute platforms. The execution time of a given simulation depends upon many factors, such as the number […]