The high-performance networking market just got a whole lot more interesting, with Intel shelling out $125m to acquire the InfiniBand switch and adapter product lines from upstart QLogic.
Intel has made no secret that it wants to bolster its Data Center and Connected Systems business by getting network equipment providers to use Xeon processors inside of their networking gear – that Intel division posted $10.1bn in revenues in 2011, and the company wants to break $20bn in the next five years.
The plan is to kill off mainframes and RISC machines, and to get Xeons inside of storage and network gear – but it also includes Intel being a major supplier of chips used in high speed switches.
Last July, Intel paid an undisclosed amount to get its hands on Fulcrum Microsystems, a maker of the FocalPoint family of ASICs for Ethernet switches and routers that run at 10GbE and 40GbE speeds. Fulcrum’s most famous customer was Arista Networks, the low-latency networking switch-maker founded by Sun Microsystems cofounder Andy Bechtolsheim. Intel never said what it paid for Fulcrum, but the company had raised $102m in venture capital since it was founded, and the price was very likely a multiple of that figure.
Despite the improvements in 10GbE and 40GbE switch chips over the past several years, InfiniBand still has important niches where even lower latency and still higher bandwidth are crucial – the supercomputing racket, for instance, or in database clustering. Just ask Oracle, which uses InfiniBand silicon from Mellanox Technologies in its Exadata database clusters and Exalogic web application server clusters, and which took a 10.2 per cent stake in the chip and switch-maker back in October 2010.
At the time, Mellanox assured Wall Street that Oracle had no intention of taking over the chipmaker, but with QLogic’s upstart InfiniBand biz snapped up by Intel, some systems or networking companies might now be tempted to take a run at Mellanox. But if Oracle or IBM or Cisco Systems are tempted to eat Mellanox, all that will do is eventually drive everyone into the loving arms of Intel, with its own Ethernet or InfiniBand ASICs. So, in a funny way, Intel is probably praying that someone does eat Mellanox.
And the funniest thing of all would be if AMD actually woke up and smelled the systems biz, and did it. By doing so, AMD would have the SwitchX two-timing Ethernet and InfiniBand ASICs and the ConnectX-3 switch-hitting server adapters, and could start integrating these deeper into its chipsets and eventually onto its chips.
Intel and InfiniBand go way back
InfiniBand has its roots in the Next Generation I/O project supported by Intel, Sun Microsystems, and Microsoft, along with the Future I/O alternative supported by IBM, Compaq, and HP. These specs were merged back in 1999, with Intel and IBM largely steering the process.
The idea was to provide a single switched fabric that would link computers and storage to each other from the desktop to the data center, and be an alternative to Ethernet networks for server-to-server and PC-to-server links, and to PCI-Express and Fibre Channel for linking peripherals.
Academically, InfiniBand was probably the right answer for a unified switch fabric – but markets don’t study in schools, they live on the mean streets and give and take hard knocks. And thus, InfiniBand has been relegated to a niche and, more importantly, the key technologies that made InfiniBand better, stronger, and faster than Ethernet have been borged onto Ethernet, closing the gap.
For now, Intel is saying that its acquisition of the InfiniBand chip, adapter, and switch business from QLogic is all about HPC, but it may be looking further down the road, when PCI-Express runs out of gas.
“At the International Supercomputing Conference 2011, Intel unveiled a bold vision to redefine HPC performance and break the exascale barrier by 2018,” said Kirk Skaugen, the outgoing general manager of Intel’s Data Center and Connected Systems Group, in a statement. “The technology and expertise from QLogic provide important assets to provide the scalable system fabric needed to execute on this vision. Adding QLogic’s InfiniBand product line to our networking portfolio will bring increased options and exceptional value to our datacenter customers.”
Last week, Skaugen – who has been pushing Intel’s expansion into switching and storage chippery for the past several years – was tapped to run Chipzilla’s PC Client Group. Diane Bryant, who has worked for Skaugen in the past and who was most recently Intel’s CIO, has replaced Skaugen and will be driving Intel’s server, storage, and networking strategies.
By selling its InfiniBand biz to Intel, QLogic will be able to double down on its Fibre Channel and Ethernet switches and adapters. QLogic has had some success with its InfiniBand gear, landing the 2,000-node “Sierra” cluster with Dell at Lawrence Livermore National Labs and also being the switch supplier for the 20,000-node procurement awarded to Appro International last June by the US Department of Energy’s Tri-Labs: Lawrence Livermore, Los Alamos, and Sandia National Laboratories.
“The sale of these InfiniBand assets will benefit our shareholders by enabling us to provide better focus and greater investment in growth opportunities for the data center with our converged networking, enterprise Ethernet, and storage area networking products,” said QLogic’s president and CEO, Simon Biddiscombe, in his statement. “After the sale, our cash position will be further strengthened and we expect the impact on earnings per share to be neutral. In addition, the sale of these assets to a leading technology innovator and recognized HPC leader will provide a greater investment stream in high performance fabrics for InfiniBand partners and customers.”
Speaking to El Reg two weeks ago, apropos of nothing, about the InfiniBand racket, QLogic’s head of global alliances and solutions marketing for HPC, Joe Yaworski, said that the reason QLogic was winning more InfiniBand deals is that its TrueScale chips offer better performance running at Quad Data Rate (QDR) 40Gb/sec speeds than Mellanox’s SwitchX products do running at Fourteen Data Rate (FDR) 56Gb/sec speeds.
The big reason for this, said Yaworski, was that QLogic bought compiler-maker PathScale in early 2006, and with it a networking stack designed to handle millions of messages per second. (The PathScale compiler business was sold to SiCortex in 2007, and when SiCortex went bust, Cray picked up the PathScale pieces in 2009; an open source PathScale has since emerged from the ashes with a license from Cray.) The combination of the TrueScale InfiniBand ASICs and the PathScale messaging stack and compilers is what gave QLogic the idea it could take on Mellanox and win.
Yaworski told El Reg that QLogic was “taking a hard look at whether or not we will ship FDR InfiniBand,” although with Intel picking up the business, there will be more funds to do whatever might seem appropriate. The company was thinking that in the second half of 2013 or the first half of 2014 it might jump straight to Enhanced Data Rate (EDR) speeds, which run the InfiniBand lanes at 25Gb/sec.
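For readers keeping score of the alphabet soup, those generation names map per-lane signalling rates onto aggregate link speeds over the usual four lanes. A quick illustrative sketch (the helper function below is our own, not any vendor's API; per-lane figures are raw signalling rates, and note that QDR's 8b/10b encoding eats more of its raw rate than the 64b/66b encoding used by FDR and EDR):

```python
# InfiniBand link speed = per-lane signalling rate x lane count.
# Raw per-lane rates for each generation (Gb/sec); FDR's nominal
# 14 is actually 14.0625, rounded here as the marketing does.
PER_LANE_GBPS = {"QDR": 10, "FDR": 14, "EDR": 25}

def link_rate_gbps(generation: str, lanes: int = 4) -> int:
    """Aggregate raw signalling rate for a link of the given width."""
    return PER_LANE_GBPS[generation] * lanes

for gen in ("QDR", "FDR", "EDR"):
    print(f"{gen} 4x link: {link_rate_gbps(gen)} Gb/sec")
# QDR 4x link: 40 Gb/sec
# FDR 4x link: 56 Gb/sec
# EDR 4x link: 100 Gb/sec
```

Hence the 40Gb/sec and 56Gb/sec figures quoted above for QDR and FDR, and why a jump straight to EDR would put a 4x link at 100Gb/sec.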
That would be a long time to wait between products and to live on QDR – a gap that Intel is not likely to tolerate. But it all depends on what Intel’s plans are, and the company isn’t saying anything right now. If QLogic weren’t a public company, both companies would probably have said even less.
Intel expects the QLogic InfiniBand deal to close by the end of March, and added that a “significant number” of the employees associated with the business were expected to accept job offers from Chipzilla. ®