Slidecast: Announcing Mellanox ConnectX-5 100G InfiniBand Adapter

In this slidecast, Gilad Shainer from Mellanox announces the ConnectX-5 adapter for high-performance communications.

“The new ConnectX-5 100G adapter further enables high performance, data analytics, deep learning, storage, Web 2.0 and more applications to perform data-related algorithms on the network to achieve the highest system performance and utilization,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Today, scalable compute and storage systems suffer from data bottlenecks that limit research, product development, and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU and FPGA performance that will enhance effectiveness and maximize the return on data centers’ investment.”

According to Shainer, ConnectX-5 is the most advanced 10, 25, 40, 50, 56 and 100Gb/s InfiniBand and Ethernet intelligent adapter on the market today. ConnectX-5 introduces smart offloading engines that enable the highest application performance while maximizing data center return on investment. It is also the first PCI Express 3.0 and 4.0 compatible adapter, enabling greater flexibility and future-proofing for the data center.

Highlights:

  • ConnectX-5 enables greater HPC performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations, advanced dynamic routing, and new capabilities to perform various data algorithms. ConnectX-5 delivers the highest available message rate of 200 million messages per second, which is 33 percent higher than the Mellanox ConnectX-4 adapter and nearly twice that of competitive products.
  • ConnectX-5 supports PCI Express 3.0 and 4.0 connectivity options, and includes an integrated PCIe switch. For upcoming PCI Express 4.0 enabled systems, ConnectX-5 will deliver an aggregated throughput of 200Gb/s.
  • ConnectX-5 Accelerated Switching and Packet Processing (ASAP2) technology enhances Open vSwitch (OVS) offloading, which results in significantly higher data transfer performance without overloading the CPU. Together with native RDMA and RoCE support, ConnectX-5 will dramatically improve Cloud and NFV platform efficiency.
  • For storage infrastructures, ConnectX-5 introduces new acceleration engines for NVM Express (NVMe). NVMe over Fabrics (NVMf) enables end-users to connect remote subsystems with flash appliances, leveraging RDMA technology to achieve faster application response times and better scalability across virtual data centers. Storage applications will see improved performance and lower latency with the advanced NVMf target offloads that ConnectX-5 delivers.
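The 200Gb/s aggregate figure in the PCIe bullet can be sanity-checked with a back-of-envelope calculation. Below is a minimal sketch, assuming x16 links and the 128b/130b line encoding that both PCIe 3.0 and 4.0 use (the lane width is not stated in the announcement):

```python
def pcie_bandwidth_gbps(gt_per_s, lanes, encoding_efficiency):
    """Usable one-direction PCIe link bandwidth in Gb/s."""
    return gt_per_s * encoding_efficiency * lanes

# PCIe 3.0 and 4.0 both use 128b/130b encoding (~1.5% overhead).
gen3 = pcie_bandwidth_gbps(8.0, 16, 128 / 130)   # PCIe 3.0 x16
gen4 = pcie_bandwidth_gbps(16.0, 16, 128 / 130)  # PCIe 4.0 x16

print(f"PCIe 3.0 x16: {gen3:.0f} Gb/s")  # ~126 Gb/s
print(f"PCIe 4.0 x16: {gen4:.0f} Gb/s")  # ~252 Gb/s
```

The arithmetic shows why the 200Gb/s aggregated throughput is tied to PCIe 4.0 systems: a single PCIe 3.0 x16 slot tops out around 126 Gb/s, while a PCIe 4.0 x16 slot (~252 Gb/s) has the headroom to carry it.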

“IDC research continues to show that InfiniBand is a network of choice for the world’s fastest HPCs. Indeed, in a recent IDC study, 30 percent of HPC users surveyed indicated that InfiniBand was the interconnect used in their fastest HPC,” said Bob Sorensen, research vice president, IDC. “Demands for growing HPC performance dictate that more efficient, capable, and intelligent networks will soon be needed to help manage the complex flow of data within a processor/storage fabric, relieve the computational processors from the overhead of data communications housekeeping, and add additional processing and network dataflow support within the network where and when it is needed. Mellanox is taking the first critical steps in formulating and realizing that vision, and the company is signaling that it will remain committed to adding new and higher performance functionality to its smart interconnect product lines.”

View the Slides * Download the MP3 * Subscribe on iTunes * Subscribe to RSS
