Interview: EXTOLL to Demo Ultra-Low-Latency Interconnect at ISC’13


It has been a while since the folks from the EXTOLL project in Germany announced their venture to develop an ultra-low-latency interconnect technology for supercomputing. With ISC’13 coming up, I caught up with Dr. Mondrian Nuessle from EXTOLL to discuss the company’s plans for the technology and its exhibit at ISC.

insideHPC: What is the EXTOLL interconnect and who is the target user of this technology?

Mondrian Nuessle: The EXTOLL interconnect technology was specifically developed for High Performance Computing. It aims at minimizing the communication overhead between nodes by optimizing the whole communication stack from the physical layer all the way up to the application interfaces like MPI.

insideHPC: How does EXTOLL differ from commodity technologies currently available out there?

Mondrian Nuessle: EXTOLL technology outperforms commodity technologies on virtually all metrics relevant to HPC, including latency, message rate, and bandwidth. The benefit depends on the particular application, but users will typically experience a speedup of around a factor of two. This is achieved through an ultra-low latency of 600 ns, a message rate of more than 100 million messages per second, and a bandwidth of 120 Gb/s per link. Each host adapter features six bi-directional links, each at 120 Gb/s, as well as an integrated low-latency message router.
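As a point of reference, one-way latency figures like these are customarily obtained with an MPI ping-pong microbenchmark. The sketch below is a minimal, generic version of such a test, not EXTOLL code; the message size and iteration count are illustrative.

```c
/* Minimal MPI ping-pong latency microbenchmark (generic sketch,
 * not EXTOLL-specific). Ranks 0 and 1 bounce a small message back
 * and forth; half the average round-trip time approximates the
 * one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;   /* illustrative iteration count */
    char buf[8];               /* small message to expose latency */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("one-way latency ~ %.0f ns\n",
               (t1 - t0) / (2.0 * iters) * 1e9);

    MPI_Finalize();
    return 0;
}
```

Run across two nodes (e.g. with mpirun -np 2), the reported number is the metric the 600 ns figure refers to.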

To form an EXTOLL network, EXTOLL adapters are plugged together directly, forming, for example, a 3D torus topology. The EXTOLL interconnect is thus designed as a direct network, rendering external switches obsolete; this alone allows customers to realize significant OPEX and CAPEX savings. The interconnect also implements many features to optimally support HPC workloads, among them low-latency messaging services, high-bandwidth bulk transfers, hardware-implemented barriers and multicast, deterministic and adaptive routing, and a large set of reliability features. In summary, the EXTOLL technology is optimized for HPC from the start, with no trade-offs. This enables customers to close the gap between commodity clusters and dedicated MPP HPC systems. In one sentence: with EXTOLL technology, users get the features, performance, and benefits of an MPP for the price tag of a commodity cluster.
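The appeal of the torus topology is easy to see in code. The sketch below is my own illustration, not EXTOLL software: it computes the minimal hop count between two nodes of a 3D torus, where the wrap-around links let traffic take the shorter way around each ring. The 8x8x8 dimensions are hypothetical.

```c
/* Illustrative sketch (not EXTOLL code): minimal hop count between
 * two nodes in a 3D torus. Each dimension forms a ring, so a packet
 * can travel either way around; the shorter direction wins per axis. */
#include <stdio.h>
#include <stdlib.h>

/* Shortest ring distance between coordinates a and b on an axis of size n. */
static int ring_dist(int a, int b, int n)
{
    int d = abs(a - b);
    return d < n - d ? d : n - d;   /* go the short way around */
}

static int torus_hops(const int src[3], const int dst[3], const int dim[3])
{
    int hops = 0;
    for (int axis = 0; axis < 3; axis++)
        hops += ring_dist(src[axis], dst[axis], dim[axis]);
    return hops;
}

int main(void)
{
    int dim[3] = {8, 8, 8};                 /* hypothetical 8x8x8 torus */
    int src[3] = {0, 0, 0}, dst[3] = {7, 4, 1};
    /* Wrap-around makes (0,0,0) -> (7,4,1) cost 1+4+1 = 6 hops, not 12. */
    printf("hops: %d\n", torus_hops(src, dst, dim));
    return 0;
}
```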

insideHPC: Does your software stack support MPI? Will your software be open source?

Mondrian Nuessle: Yes, of course. The EXTOLL software stack supports MPI as a “first-class citizen”. From an OS perspective, EXTOLL will focus on Linux first. The Linux kernel drivers, the low-level API libraries, and the MPI integration will all be released as open source. One of the first MPI distributions to be supported is Open MPI.

But the EXTOLL software is not focused exclusively on MPI. Support for other communication middleware and runtimes, such as GASNet, is under development, and a TCP/IP transport service will be available as well.

insideHPC: Are you still in the prototype stage or is the technology currently available?

Mondrian Nuessle: The EXTOLL ASIC has just reached the tape-out stage; first silicon will be available around mid-2013. The prototypes are based on FPGAs and are fully functional. These prototypes, including the beta software stack, are out in the field and show performance comparable to leading commodity products in many regards, even though the raw throughput of the FPGA is at least a factor of four below that of the targeted ASIC technology.

insideHPC: What will you be showcasing at your booth during ISC’13?

Mondrian Nuessle: First of all, we will be demonstrating the EXTOLL interconnect with industry-standard servers in cooperation with Thomas Krenn AG and NVIDIA. We will also be showing EXTOLL’s direct GPU-to-GPU communication: one GPU directly communicates with and accesses the memory of a second GPU via the EXTOLL network, without involving the host CPUs. This dramatically improves inter-GPU communication, saving both energy and time. The technique is particularly useful in combination with recent NVIDIA features like Dynamic Parallelism and GPUDirect RDMA, and it addresses the increasing use of accelerators in HPC.
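EXTOLL’s network-level API for this feature is not public, but the underlying idea, removing host-CPU staging from GPU-to-GPU transfers, can be illustrated with NVIDIA’s standard CUDA peer-to-peer calls, which perform the analogous direct copy between two GPUs inside a single node. The sketch below is generic CUDA runtime host code, not EXTOLL’s interface.

```c
/* Generic illustration of direct GPU-to-GPU transfer using NVIDIA's
 * CUDA peer-to-peer API within one node. This is NOT EXTOLL's API;
 * it only shows the principle of bypassing host staging buffers. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t bytes = 1 << 20;   /* 1 MiB illustrative payload */
    void *src, *dst;
    int can_access = 0;

    /* Check that GPU 0 can address GPU 1's memory directly. */
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (!can_access) {
        fprintf(stderr, "peer access between GPU 0 and GPU 1 unavailable\n");
        return 1;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   /* map GPU 1 into GPU 0's space */
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    /* Copy GPU 0 -> GPU 1 directly over the interconnect; no host
     * bounce buffer is involved. */
    cudaSetDevice(0);
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    puts("peer-to-peer copy complete");
    return 0;
}
```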

We will also be presenting our 12x active optical cables (AOCs). These cables feature an electrical connector that plugs directly into any electrical connector of an EXTOLL card, so depending on link length, users can choose between EXTOLL AOCs and electrical cabling. Moreover, EXTOLL is used within the EU-funded FP7 project DEEP for the BOOSTER interconnect, and the first BOOSTER node hardware will be presented in cooperation with Eurotech at the Eurotech booth and at the booth of the Jülich Supercomputing Center (JSC).

insideHPC: Why is ISC’13 an important event for you as you commercialize this technology?

Mondrian Nuessle: The best way to commercialize a new product, or even a company, is to be at the right place at the right time, and ISC is definitely among the “hot” places for HPC. It is a perfect opportunity to meet trade partners, get their personal feedback, initiate or continue negotiations, and become aware of upcoming developments. While SC is the premier venue for the US market, ISC is indispensable for talking to European customers and partners. And for EXTOLL as a German company, we are especially happy to be able to attend this event in Germany.