
Search Results for: mellanox

Mellanox FDR InfiniBand Powers Stampede Supercomputer

This week Mellanox announced that its end-to-end FDR InfiniBand technology is powering the Stampede supercomputer at the Texas Advanced Computing Center (TACC). As the most powerful supercomputing system in the NSF XSEDE program, the 10 Petaflop Stampede system integrates thousands of Dell servers and Intel Xeon Phi coprocessors with Mellanox FDR 56Gb/s InfiniBand SwitchX-based switches and ConnectX-3 adapter […]
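As a rough back-of-the-envelope sketch (the figures below are standard InfiniBand link parameters, not taken from the article): FDR's headline 56Gb/s comes from a 4x port running four lanes at 14.0625 Gb/s signaling, while 64b/66b line encoding puts the effective data rate closer to 54.5Gb/s.

```python
# Back-of-the-envelope FDR InfiniBand link-rate check.
# Standard 4x FDR parameters (assumed, not quoted from the article):
lanes = 4                        # a 4x InfiniBand port
signal_rate_gbps = 14.0625       # FDR per-lane signaling rate
encoding_efficiency = 64 / 66    # FDR uses 64b/66b line encoding

raw_gbps = lanes * signal_rate_gbps         # ~56.25 Gb/s, marketed as "56Gb/s"
data_gbps = raw_gbps * encoding_efficiency  # ~54.55 Gb/s of usable data

print(f"raw: {raw_gbps:.2f} Gb/s, effective: {data_gbps:.2f} Gb/s")
```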

Video: Mellanox – The Foundation for Scalable Computing

In this video from the HPC Advisory Council Switzerland Conference, Colin Bridger presents: Mellanox: The Foundation for Scalable Computing. Download the Slides (PDF).

HP Taps Mellanox for Low Latency Blade Switch

Over at the HP Blog, Steve Barry writes that the days of blade servers being hampered by the limitations of top-of-rack switches may be over, thanks to new technology from Mellanox. Designed specifically for customers that demand performance and raw bandwidth, the Mellanox SX1018HP blade switch provides up to sixteen 40Gb server downlinks and up […]

Mellanox FDR 56Gb/s InfiniBand Powers Fastest Supercomputer in India

This week Mellanox announced that India’s Centre for Development of Advanced Computing (C-DAC) is using the company’s end-to-end FDR 56Gb/s InfiniBand solutions for PARAM Yuva-II, the fastest supercomputer in India. As the premier R&D organization of the Department of Electronics and Information Technology, C-DAC chose Mellanox’s robust, high-speed interconnect solution due to its […]

NASDAQ OMX NLX Selects Mellanox’s InfiniBand Solutions for Core Trading Interconnect

Today Mellanox announced that its high performance InfiniBand solutions have been selected by NASDAQ OMX NLX to enable fast, reliable trading. NLX, the new London derivatives market, will offer a range of both short- and long-term interest rate euro- and sterling-based listed derivatives products, subject to Financial Services Authority approval. InfiniBand will act as the […]

Mellanox Introduces MetroDX for Inter Data Center Connectivity

Today Mellanox announced the availability of new RDMA InfiniBand and Ethernet long-haul interconnect solutions as part of the company’s MetroX product line. MetroX products enable long-reach, high-throughput RDMA connectivity within and between data centers across multiple geographically distributed sites. The company also introduced new MetroDX systems designed for internal data center long-reach connectivity or […]

Mellanox Wins Data Center Interconnect Awards for Ethernet Switching and Virtual Protocol Interconnect Technology

Today Mellanox announced it has been presented with two data center interconnect awards from ZDNet China and China Network World. The annual awards recognize vendors and products that facilitate low latency and high efficiency across small businesses and enterprise data centers in Asia. The ZDNet China Best Network Switch was awarded to Mellanox’s SX1024 non-blocking Top-of-Rack […]

Mellanox Financials show Record Growth in 2012

Yet another HPC company announced record results this week with news that Mellanox Technologies posted over $500 million in revenue in 2012. And while we don’t focus much on financials here at insideHPC, I think it is worth noting that the company achieved 93.2 percent year-over-year revenue growth. We are proud of our results for […]

Mellanox Accelerates Teradata Unified Big Analytics Appliance

Today Mellanox announced that Teradata has chosen its InfiniBand interconnect solution to accelerate the Teradata Aster Big Analytics Appliance. Designed for demanding analytics that require high computational power and the fastest data movement, the Teradata Aster Big Analytics Appliance offers up to 19 times better data throughput and performs analytics up to 35 times faster […]

Video: Mellanox Breaks Performance Records, Dominates TOP500 at SC12

In this video from SC12, Todd Wilde from Mellanox describes the company’s recent advancements in high speed InfiniBand interconnects. InfiniBand recently became the leading interconnect on the TOP500 with 224 clusters. On the product side, the Connect-IB dual-port 56Gb/s FDR InfiniBand adapter recently achieved a world-record throughput of more than 100Gb/s utilizing PCI Express 3.0 x16 […]
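A quick sanity check on that last figure (using standard PCIe 3.0 parameters, which are not stated in the article): a x16 slot carries 16 lanes at 8 GT/s each, and after 128b/130b encoding overhead that leaves roughly 126 Gb/s usable, so a dual-port FDR adapter exceeding 100Gb/s fits within a single Gen3 x16 slot.

```python
# Sanity check: can a single PCIe 3.0 x16 slot carry more than 100 Gb/s?
# Standard PCIe 3.0 parameters (assumed, not quoted from the article):
lanes = 16
transfer_rate_gtps = 8           # PCIe 3.0: 8 GT/s per lane
encoding_efficiency = 128 / 130  # PCIe 3.0 uses 128b/130b line encoding

usable_gbps = lanes * transfer_rate_gtps * encoding_efficiency  # ~126 Gb/s

# ~126 Gb/s leaves headroom above the >100Gb/s record cited above
print(f"usable PCIe 3.0 x16 bandwidth: {usable_gbps:.1f} Gb/s")
```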