“GIGABYTE servers – across standard, Open Compute Platform (OCP) and rack scale form factors – deliver exceptional value, performance and scalability for multi-tenant cloud and virtualized enterprise datacenters,” said Etay Lee, GM of GIGABYTE Technology’s Server Division. “The addition of QLogic 10GbE and 25GbE FastLinQ Ethernet NICs in OCP and standard form factors will enable delivery on all of the tenets of open standards, while supporting key virtualization technologies like SR-IOV and full offloads for overlay networks using VXLAN, NVGRE and GENEVE.”
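For readers unfamiliar with SR-IOV, the feature is typically enabled on Linux hosts through the kernel’s generic PCI sysfs interface rather than a vendor-specific tool. The sketch below is a minimal illustration of that interface only; the interface name eth0 and the VF count of 4 are placeholders, and nothing here is specific to the GIGABYTE or QLogic products in the announcement.

```c
#include <stdio.h>

/* Generic Linux sysfs interface for SR-IOV (not vendor-specific):
 *   /sys/class/net/<iface>/device/sriov_totalvfs - max VFs the NIC supports
 *   /sys/class/net/<iface>/device/sriov_numvfs   - VFs currently enabled (writable)
 */
int main(int argc, char **argv)
{
    const char *iface = (argc > 1) ? argv[1] : "eth0";  /* placeholder name */
    char path[256];
    FILE *f;
    int total = 0;

    /* Read how many virtual functions the adapter can expose */
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_totalvfs", iface);
    f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &total) != 1) {
        fprintf(stderr, "%s: SR-IOV not supported or not readable\n", iface);
        if (f) fclose(f);
        return 1;
    }
    fclose(f);
    printf("%s supports up to %d virtual functions\n", iface, total);

    /* Enable up to 4 VFs (requires root) */
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_numvfs", iface);
    f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return 1;
    }
    fprintf(f, "%d\n", total < 4 ? total : 4);
    fclose(f);
    return 0;
}
```

Each enabled virtual function then appears as its own PCI device that can be passed through to a guest, which is what lets the data path bypass the hypervisor.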
The TOP500 list is a good proxy for how different interconnect technologies are being adopted for the most demanding workloads, and a useful leading indicator of enterprise adoption. The essential takeaway is that the world’s leading systems are currently dominated by vendor-specific technologies. The Open Fabrics Alliance (OFA) will be increasingly important in the coming years as a forum that brings together the leading high-performance interconnect vendors and technologies to deliver a unified, cross-platform, transport-independent software stack.
“We are pleased to start shipping the ConnectX-5, the industry’s most advanced network adapter, to our key partners and customers, allowing them to leverage our smart network architecture to overcome performance limitations and to gain a competitive advantage,” said Eyal Waldman, Mellanox president and CEO. “ConnectX-5 enables our customers and partners to achieve higher performance, scalability and efficiency of their InfiniBand or Ethernet server and storage platforms. Our interconnect solutions, when combined with Intel, IBM, NVIDIA or ARM CPUs, allow users across the world to achieve a significantly better return on investment from their IT infrastructure.”
Today Mellanox announced the availability of new software drivers for RoCE (RDMA over Converged Ethernet). The new drivers are designed to simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and enable high-end performance using RoCE, without requiring the network to be configured for lossless operation. This enables cloud, storage, and enterprise customers to deploy RoCE more quickly and easily while accelerating application performance, improving infrastructure efficiency and reducing cost.
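Because RoCE exposes the same verbs API that InfiniBand applications already use, a driver-level change like this is transparent to application code. As a rough illustration, the minimal C sketch below (a generic libibverbs example, not part of the announced drivers) enumerates the RDMA devices on a host and reports whether each port’s link layer is Ethernet, i.e. a RoCE port, or native InfiniBand.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port_attr;
        /* Query port 1; RoCE ports report IBV_LINK_LAYER_ETHERNET */
        if (ibv_query_port(ctx, 1, &port_attr) == 0) {
            printf("%s: link layer %s, state %s\n",
                   ibv_get_device_name(devices[i]),
                   port_attr.link_layer == IBV_LINK_LAYER_ETHERNET ?
                       "Ethernet (RoCE)" : "InfiniBand",
                   ibv_port_state_str(port_attr.state));
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

The sketch builds against libibverbs (for example, gcc roce_list.c -libverbs) and makes no assumption about whether the underlying Ethernet fabric is configured as lossless, which is exactly the deployment burden the new drivers aim to remove.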
Today the Ethernet Alliance unveiled its 2016 Ethernet Roadmap at OFC 2016. The roadmap covers Ethernet’s full breadth of speeds, from 10 Mb/s to 400GbE, along with Power over Ethernet (PoE) and innovations such as the OIF’s FlexEthernet, and offers an overview of existing and next-generation modules and interfaces, including QSFP-DD, microQSFP, and OBO, as well as the nomenclature used at each speed.
While we’re always on the lookout for HPC news, not everything makes it to the front page. Notable items from this week include big boosts for Apache Spark, containerization, and Lustre.
Today Mellanox announced its ConnectX-4 Lx 10/25/40/50 Gigabit Ethernet adapter, delivering optimal cost-performance and scalable connectivity for Cloud, Web 2.0 and storage platforms. As the first adapter designed to serve as a direct replacement for commonly deployed 10 Gigabit Ethernet adapters, the ConnectX-4 Lx allows businesses to migrate to higher-performance technology as their bandwidth requirements increase without demanding an infrastructure overhaul or added operating expense.
Today Mellanox announced that Monash University in Melbourne, Australia, has selected the company’s CloudX platform to provide the fabric for its new cloud data center.
Federal buyers of high performance networks got a boost this week with the announcement that Mellanox end-to-end InfiniBand and Ethernet interconnect solutions are now available through SYNNEX Corporation’s General Services Administration (GSA) Schedule.
“Converged Ethernet networks are now displacing the dedicated compute, storage and data networks of the past in today’s HPC deployments. Top institutions in the petroleum, genomics and finance industries, as well as leading universities, have partnered with Extreme Networks to solve their toughest HPC networking challenges.”