Mellanox ConnectX-5 Sets DPDK Performance Record with 100Gb/s Ethernet

Today Mellanox announced that its ConnectX-5 100Gb/s Ethernet Network Interface Card (NIC) has achieved a record 126 million packets per second (Mpps) of forwarding performance running the open source Data Plane Development Kit (DPDK). This breakthrough performance signifies the maturity of high-volume server I/O to support large-scale, efficient production deployments of Network Function Virtualization (NFV) in both Communication Service Provider (CSP) and cloud data centers. The DPDK performance of 126 Mpps was achieved on HPE ProLiant DL380 Gen9 servers with the Mellanox ConnectX-5 100Gb/s interface.
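For context, the 126 Mpps figure can be sanity-checked against the theoretical line rate of a 100Gb/s Ethernet link carrying minimum-size 64-byte frames. A quick sketch (the preamble and interframe-gap overheads used below are standard Ethernet values, not figures from the announcement):

```python
# Back-of-the-envelope check of 126 Mpps against 100Gb/s line rate
# for 64-byte frames, assuming standard Ethernet per-packet overheads.

LINK_BPS = 100e9      # 100Gb/s link speed
FRAME_BYTES = 64      # minimum Ethernet frame size
PREAMBLE_BYTES = 8    # preamble + start-of-frame delimiter
IFG_BYTES = 12        # minimum interframe gap

# Each 64-byte packet occupies 84 bytes (672 bits) on the wire.
wire_bits = (FRAME_BYTES + PREAMBLE_BYTES + IFG_BYTES) * 8
line_rate_pps = LINK_BPS / wire_bits   # theoretical maximum packet rate

print(f"Theoretical 64B line rate: {line_rate_pps / 1e6:.1f} Mpps")
print(f"126 Mpps is {126e6 / line_rate_pps:.0%} of line rate")
```

By this arithmetic the 64-byte line rate of a single 100Gb/s port is about 148.8 Mpps, so the reported 126 Mpps corresponds to roughly 85% of theoretical line rate.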

“We have established Mellanox as the leading cloud networking vendor by working closely with 9 out of 10 hyperscale customers, who now leverage our advanced offload and acceleration capabilities to boost the total infrastructure efficiency of their cloud, analytics, and machine learning deployments,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “We are extending the same benefits to our CSP customers through a distinctive blend of enhanced packet processing and virtualization and storage offload technologies, enabling them to deploy Telco cloud and NFV with confidence.”

The I/O-intensive nature of Virtualized Network Functions (VNFs), including virtual Firewall, virtual Evolved Packet Core (vEPC), virtual Session Border Controller (vSBC), Anti-DDoS, and Deep Packet Inspection (DPI) applications, has posed significant challenges in building cost-effective NFV Infrastructures that meet packet rate, latency, jitter, and security requirements. Leveraging its wealth of experience in building high-performance server/storage I/O components and switching systems for High Performance Computing, Hyperscale data centers, and telecommunications operators, Mellanox has the industry’s broadest range of intelligent Ethernet NIC and switch solutions, spanning interface speeds of 10, 25, 40, 50, and 100Gb/s. In addition, both the Mellanox ConnectX series of NICs and the Spectrum series of Ethernet switches feature best-in-class packet rates with 64-byte traffic, low and consistent latency, and enhanced security with hardware-based memory protection.

In addition to designing cutting-edge hardware, Mellanox also actively works with infrastructure software partners and open source consortiums to drive system-level performance to new levels. Mellanox has continually improved DPDK Poll Mode Driver (PMD) performance and functionality through multiple generations of ConnectX-3 Pro, ConnectX-4, ConnectX-4 Lx, and ConnectX-5 NICs.

“As CSPs deploy NFV in production, they demand reliable NFV Infrastructure (NFVI) that delivers the quality of service their subscribers demand. A critical aspect of this is making sure the NFVI offers the data packet processing performance required to support the service traffic,” said Claus Pedersen, director, Communication Service Provider Platforms, Data Center Infrastructure Group, Hewlett Packard Enterprise. “The HPE NFV Infrastructure lab has worked closely with Mellanox to ensure that HPE ProLiant Servers with the Mellanox ConnectX series of NICs will enable our CSP customers to achieve the scale, reliability and efficiency they require of their NFV deployments.”
