Interconnects and Exascale: An Interview with Intel Fellow Shekhar Borkar


The Exascale Report: Why don’t we start off at a very basic level. Let’s talk about the definition of interconnects. What do you mean when you talk about interconnects?

Borkar: So, you know Mike, interconnect means different things to different people. Interconnects start from interconnecting two transistors on a chip – or on a die – all the way to connecting cities by means of fiber or wire over distances of kilometers. So interconnects really span this entire spectrum, from micrometers to kilometers.
But now, let's stick to HPC. In the past, when you talked about interconnects in HPC, it really meant the traditional interconnect fabrics for supercomputers – meshes or butterflies or whatever topology you can think of. But in this era, it's a lot broader than just an interconnect fabric. When I say interconnect today, I mean an interconnect that connects transistors together, that connects cores together on the chip, that connects regions together on the board, and that connects boards and racks together. A tenth of a kilometer is maybe the largest distance I'm talking about, all the way down to micrometers. To me that's an interconnect – how to move data around within these entities and over these distances.

TER: So that’s a much broader definition than most people use in this discussion.

Borkar: Yes, and that’s why in HPC – especially in the “exa” – you really have to broaden the definition of interconnects.

TER: At Intel, where does the responsibility for research and development related to interconnects fall?

Borkar: There is quite a bit of research going on inside Intel Labs encompassing different interconnect technologies – copper-based as well as photonics-based – each with its own merits and benefits, as well as its own problems and costs. So the fundamental research is happening here at Intel Labs, quite a bit of other research is happening in the manufacturing and technology group, and some of the development is happening in the product groups.

TER: So, let’s change direction just a bit. What would you say is the single most difficult challenge that your researchers face today regarding interconnect technology – and what you need to do with it in order for us to move forward toward exascale-class systems?

Borkar: So, for HPC, the number one problem that I see in the future is the energy. The number two problem that I see is the energy, and the number three problem that I see is…the energy.

In the past, it was a little different. The number one problem we had in HPC was the cost of computation – not in terms of energy, but in terms of dollars. Today, the dollar cost shows up as energy. So, if I look into the future, it’s going to be a lot less expensive to do a computation than to move data. In the past it was the other way around; the computation was expensive – in terms of energy, in terms of dollars, in terms of performance, all those things.

And by the way – for a long time, the interconnect people were like second-class citizens – a bunch of engineers sitting over there in the corner connecting chips and boards together – while the processor guys were the first-class citizens. It’s changing, and it will keep changing.

TER: So how do you apply interconnect technology to deal with data locality?

Borkar: Well, now you have to be a little smarter – all the way throughout the stack. For example, let’s look at the application level. Go back and start looking at the algorithms – the discrete Fourier transform, for example. All those algorithms were devised to reduce the amount of computation. There was a little talk about communication, and latency was important from a performance point of view, but really, all these algorithms were about how I can reduce the number of floating point operations I have to do for a particular task. Nobody really talked about how to reduce the amount of data movement. It’s a totally different animal. And you know one thing? I think we are a little late right now, as fundamental computer scientists and algorithm designers, in figuring out new algorithms that do exactly the opposite – because the computation is cheap. If you ask me for exascale, I’m willing to put in as many floating point units as you want, but please, don’t make me move data over a centimeter. It’s expensive.
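
To make that shift concrete, here is a rough back-of-the-envelope cost model in Python. The energy constants are illustrative assumptions, not Intel figures (the 2 pJ/bit number for a copper link is one Borkar quotes later in the interview); the point is only that once a floating-point operation costs far less than moving its operands across a board, an algorithm's energy bill is dominated by the bytes it moves rather than the flops it performs.

```python
# Illustrative energy model: flops vs. data movement.
# The constants below are assumptions for the sake of the example,
# not measured or published Intel figures.

E_FLOP_PJ = 10.0       # assumed energy per double-precision flop, in picojoules
E_BIT_LOCAL_PJ = 0.1   # assumed per-bit energy for a short on-package hop
E_BIT_BOARD_PJ = 2.0   # per-bit energy for a longer copper link (figure Borkar quotes later)

def energy_pj(flops, bytes_moved, per_bit_pj):
    """Total energy in picojoules for a kernel doing `flops` floating-point
    operations while moving `bytes_moved` bytes over a link that costs
    `per_bit_pj` picojoules per bit."""
    return flops * E_FLOP_PJ + bytes_moved * 8 * per_bit_pj

# A kernel tuned only to minimize flops: 1e6 flops, but it streams 8 MB across the board.
flop_optimized = energy_pj(flops=1e6, bytes_moved=8e6, per_bit_pj=E_BIT_BOARD_PJ)
# The same work restructured for locality: twice the flops, a tenth of the traffic, kept on-package.
data_local = energy_pj(flops=2e6, bytes_moved=8e5, per_bit_pj=E_BIT_LOCAL_PJ)

print(f"flop-optimized: {flop_optimized / 1e6:.1f} microjoules")
print(f"data-local:     {data_local / 1e6:.1f} microjoules")
```

With these assumed constants, the flop-optimized version spends several times the energy of the data-local one, even though the latter does twice the arithmetic.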

TER: So, isn’t it just as likely that interconnect technology will be part of the problem – and not part of the solution – in trying to achieve exascale?

Borkar: Yes – and no. It is a problem. The number one and number two problems, after energy, are the interconnect – because when you talk about interconnect and moving data from point A to point B, you have fundamental laws of physics governing that transfer, and it’s really difficult to bend those rules. Don’t even try to break them – bending them is difficult enough. So what did we do in computation? We did something really nice and nifty: we started scaling. The distances and the areas started shrinking, so the energy went down proportionally. For interconnect, I’m keeping the distances the same. If we’re looking at an exascale computer, it will probably have approximately the same size as a petascale computer today. So I still have to move data of some magnitude over 100 meters. In petascale I have to do it, and in exascale I’ll have to do it. That distance doesn’t change. Because I’m greedy – I want exascale performance. For one reason or another – because I’m human – I want it, I need it. There are lots of problems to be solved, and for that I need exascale performance.

TER: Can you comment on the resources that Intel has applied to research in the area of interconnect?

Borkar: Sure. Interconnect has been a first-class citizen within Intel Labs because we realized this a long time ago. We have several research teams working on both electrical interconnects and photonics-based interconnects. And to tell the truth, their job is to put each other out of business. That’s what we like to do. Whichever technology shows the benefit – that’s the one we’ll use. For example, if you go back a while, the distance beyond which photonics would pay off was on the order of ten meters or so – below that you would definitely use electrical communication. That distance has come down. I believe it’s now on the order of one to two meters. We can argue whether it’s one meter or two meters, but it’s definitely less than ten meters. So we have that kind of research going on – on both photonics and electrical communications.
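
The crossover distance Borkar describes can be illustrated with a toy model: a photonic link pays a roughly fixed energy overhead for the laser, modulator, and detector but is nearly flat with distance, while an electrical link is cheap when short and its energy grows with wire length. The coefficients below are made up, chosen only so the crossover lands near the one-to-two-meter range he mentions.

```python
# Toy model of the electrical-vs-photonics crossover distance.
# All coefficients are illustrative assumptions, not Intel data.

def electrical_pj_per_bit(length_m, base=1.0, per_meter=2.0):
    """Electrical link: cheap termination, but energy grows with wire length."""
    return base + per_meter * length_m

def photonic_pj_per_bit(length_m, fixed=4.0, per_meter=0.05):
    """Photonic link: fixed laser/modulator/detector overhead, nearly flat with distance."""
    return fixed + per_meter * length_m

for length in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    e = electrical_pj_per_bit(length)
    p = photonic_pj_per_bit(length)
    winner = "electrical" if e < p else "photonics"
    print(f"{length:5.1f} m: electrical {e:5.2f} pJ/bit, photonic {p:5.2f} pJ/bit -> {winner}")
```

With these assumptions the two curves cross at about 1.5 meters; where the real crossover sits depends on the actual circuits, fibers, and data rates involved.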

TER: What would you say is the most promising development that you have seen recently in this area?

Borkar: Oh, there has been a tremendous amount of development over the last several years in both electrical and photonics communications. For example, in photonics you see various lasers coming out – lasers being incorporated in silicon – to improve the level of integration, because integration is good: it improves reliability and so forth. You can do wavelength division multiplexing now, so over a single fiber you can have multiple light beams going through. You can see new resonators coming out.

In electrical, you can see all kinds of signal processing techniques being used – something we didn’t even think about ten to fifteen years ago. Why? Because transistors are cheap. I can do signal processing per pin – every pin has a signal processor in it. So, lots of advancements in circuits, in technology, in materials, and so on. Just to give you an idea: about ten years ago, the amount of energy it took to move a bit over copper was on the order of maybe 50 to 100 picojoules per bit. Today in research, we have achieved two picojoules per bit, at about 10-15 gigabits per second. So high data rate, low energy. Just like the beer (smile)... tastes great... and less filling.
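
Those figures are easy to sanity-check: energy per bit multiplied by data rate gives the power of a single lane, since one picojoule per bit at one gigabit per second is exactly one milliwatt.

```python
def lane_power_mw(energy_pj_per_bit, rate_gbps):
    """Power of one serial lane in milliwatts: (pJ/bit) x (Gbit/s) = mW."""
    return energy_pj_per_bit * rate_gbps

# Figures quoted in the interview (the 10 Gb/s rate for the older link is assumed for comparison).
print(lane_power_mw(2, 15))    # ~30 mW  : research link at 2 pJ/bit, 15 Gb/s
print(lane_power_mw(100, 10))  # ~1000 mW: decade-old link at 100 pJ/bit
```

So a 2 pJ/bit link at 15 Gb/s dissipates about 30 mW, versus roughly a watt for a 100 pJ/bit link at a comparable rate – and an exascale machine would multiply whatever the per-lane number is by an enormous number of lanes.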

TER: So, any closing thoughts on interconnect technology and the drive towards exascale? Do you see paths of convergence, or do you see brick walls?

Borkar: Well, yes and no. I do see brick walls if you decide not to change. If you stay with business as usual, you are going to hit a brick wall. Some of us researchers are cautioning the community that business should NOT be as usual – and if it isn’t, you will succeed. Here’s an example of how.

So Mike – you’re a supercomputing guy, so you know. In the past when we talked about supercomputers, we talked about the fabric, and we had the notion of bisection bandwidth. Bisection bandwidth was important because communication was energy-free – we didn’t even talk about energy. So you had a tremendous amount of bisection bandwidth: you could cross it any which way, it was homogeneous, it was there whenever I needed it. Today, if you tried to provide that homogeneous bisection bandwidth, the entire computer would melt. You can’t afford it. Since you can’t afford it, now you have to start thinking about how to do without constant bisection bandwidth. Now you are talking about a totally different topology of interconnection scheme for supercomputers – what we call tapering. That means as you go up and up in the hierarchy, you get less and less bandwidth. We will provide you with more bandwidth – but don’t use it! Or use it at your own risk. So, it’s a tapered bandwidth. It’s hierarchical. It’s not homogeneous – different kinds of fabrics at different locations. In the past, if it was a mesh, it was a mesh; if it was a torus, it was a torus – all throughout the system. But not anymore. Over a millimeter, maybe a bus; over a centimeter, maybe a bus or a crossbar. When you go beyond a centimeter, maybe a crosspoint switch somewhere, maybe a mesh after that, maybe a fat tree? We haven’t really looked at locality – and hence the research.
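
A minimal sketch of what a tapered hierarchy means in practice, with hypothetical levels and bandwidth numbers that describe no real machine: the farther a message has to travel up the hierarchy, the less bandwidth it can count on.

```python
# Hypothetical tapered-bandwidth hierarchy. The levels and numbers are
# illustrative assumptions, not a description of any real machine.

LEVELS = [
    ("on-die",         1000.0),   # GB/s available to a node at this level
    ("on-package",      500.0),
    ("board",           100.0),
    ("rack",             25.0),
    ("system (100 m)",    5.0),
]

def effective_bandwidth(highest_level_crossed):
    """A message is limited by the slowest level of the hierarchy it crosses."""
    return min(bw for _, bw in LEVELS[:highest_level_crossed + 1])

for i, (name, bw) in enumerate(LEVELS):
    print(f"traffic crossing up to {name:15s}: {effective_bandwidth(i):7.1f} GB/s "
          f"({bw / LEVELS[0][1]:.0%} of on-die)")
```

The design question tapering raises is exactly the one Borkar points at: software has to be organized so that most traffic stays at the cheap, high-bandwidth levels near the bottom of the hierarchy.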

Shekhar Borkar graduated with an MSEE from the University of Notre Dame in 1981 and joined Intel Corporation. He worked on the 8051 family of microcontrollers and subsequently on Intel supercomputers. Shekhar led development of the communication fabric for the iWarp multicomputer and the interconnect components of the first TFLOPS machine, ASCI Red. He is now the principal investigator of the DARPA-funded UHPC project. His research interests are high-performance, low-power circuits and high-speed signaling. He is a Fellow of the IEEE and has published over 100 papers in conferences and journals.