
The Fast Data Imperative: An Interview with Mellanox CTO, Michael Kagan

From the beginning, moving data fast has been the key mission of Mellanox. To learn more, we caught up with the company's CTO, Michael Kagan, in this special feature from the Print 'n Fly Guide to SC13 in Denver.

insideHPC: With the 25th anniversary of the SC conference this year, could you please share your perspective on how far we’ve come in system interconnects over the past quarter-century?

Michael Kagan: As technology has evolved, the interconnect has played a paramount role in bringing new concepts to life and enabling new levels of system performance, efficiency, and scalability. We have witnessed the migration of HPC interconnect technology into Web 2.0, cloud, and data intelligence systems to handle the explosive growth of data. We are in an era where the interconnect is critical for so many applications. For instance, real-time data analytics of unstructured data on a global scale was only a concept 25 years ago; today, however, real-time access to data is literally a world economic dependency. When you think about it in these terms, the interconnect is just as important to our global economy as water is to the fish in the sea.

insideHPC: In terms of today, how well-entrenched is the latest generation of FDR InfiniBand out there in the TOP500 supercomputers?

Michael Kagan: Mellanox FDR InfiniBand systems grew over 3X from June '12 to June '13. We are pleased with the adoption of our technology and believe it will continue to be the performance leader in the TOP500. As Petascale-capable computing becomes more commonplace in the TOP500, InfiniBand is also the leading interconnect, with 16 out of the 33 such systems in the June 2013 list; what's more, FDR InfiniBand connects the four fastest InfiniBand systems on the list. From our perspective, the TOP500 is a good metric to show that the HPC community has a high regard for industry-standard, high-performance interconnect solutions that are well supported by an ecosystem of open-source software.

insideHPC: What comes after FDR? Will your ConnectX Ethernet likely take a leap at that point as well?

Michael Kagan: The next-generation 100Gb/s technology comes from the architectural building blocks of ConnectX-3 VPI and Connect-IB FDR technology, and is planned for release in the 2014-2015 time frame. Both our InfiniBand and Ethernet products deliver the best price/performance in the market, and we see increased adoption of both. In fact, Mellanox sold more than one million end-points last year, around 10% of the server market.

insideHPC: The SC13 conference is loaded with sessions on Exascale. Will InfiniBand have a role to play as we get to 10^18 flops sometime in the next 7-10 years?

Michael Kagan: We are sharply focused on Exascale and believe we are on target to address the challenges it presents. This includes advanced data protection mechanisms; support across the various network topologies, instruction set architectures, and acceleration technologies; increasing bandwidth; and providing the lowest possible latency. Particular attention to improving performance at scale will be a key issue. We participate in several of the Exascale programs and plan to continue investing in faster and faster interconnect solutions to pave the way for Exascale.


insideHPC: Mellanox has made some Photonics acquisitions recently. Is this the future and are we reaching the limits of silicon?

Michael Kagan: Yes, Mellanox has made two significant acquisitions in the area of silicon photonics and VCSEL-based technologies. We acquired a company called Kotura, which brings a world-class team with silicon photonics expertise. We also recently acquired IPtronics, a leading fabless supplier of low-power, high-speed analog semiconductors for parallel optical interconnect solutions. We believe these investments are strategic to enabling 100Gb/s data rates and beyond.

insideHPC: What happens to Mellanox if x86 vendors start building InfiniBand interfaces directly onto their processors?

Michael Kagan: It is all about bringing the best solution to the market. To date, Mellanox has delivered seven generations of advanced interconnect solutions and is a generation or more ahead of the competition. We believe that we will maintain and even extend that gap into the future. Only Mellanox provides a complete end-to-end solution for 40 and 56Gb/s, and that is not an easy task. Furthermore, once you add the software complexity into the mix, you will see that it is not easy for other companies to bring the solutions that Mellanox has today.

insideHPC: As CTO, what is your toughest challenge in pushing the limits of communications technology? (Resiliency, Component cost, Bandwidth, Power consumption?)

Michael Kagan: The toughest challenges are usually only uncovered when you actually try pushing the limits of technology, which is a driving force at Mellanox. By far the most challenging task is to truly understand the problem for which you are trying to provide a solution. There are many adjacent issues that need to be considered, such as security in the cloud, scalability to hundreds of thousands of end-points, and power efficiency, to name a few.

This story appears in The Print ‘n Fly Guide to SC13 in Denver. We designed this 24-page Guide to be an in-flight magazine custom tailored for your journey to the Mile-High city at SC13.
