Research HPC: An Interview with Intel’s Mic Bowman

As all eyes turn to the horizon in search of exascale, one has to wonder how the computational infrastructure will change by the end of this decade. One very exciting possibility is presented by the rich simulation and interface technologies being explored as part of what Intel calls the 3D web.

From corporate and social gaming to immersive educational programs, virtual worlds and next generation interfaces will soon have an impact on many different aspects of society. To get a better understanding of what this is all about, we interviewed Mic Bowman, a principal engineer in Intel Labs and head of Intel’s Virtual World Infrastructure research project.

The Exascale Report: Mic, thanks for joining us today. A few years ago, at SC09 in Portland, Oregon, Justin Rattner (Intel CTO) gave the keynote address on the topic of 3D Internet, and I know you worked closely with him on that speech. What were some of the key points that you recall Justin making in that keynote?

Bowman: Well, I think the main message Justin was trying to get across is that there is value to consumers in all the work we are doing in high performance computing, and that value tends to express itself in what we call the 3D web. It's rich simulation for the purposes of gaming, and it's rich simulation for creating these immersive environments in which people can interact with each other. Behind the scenes, those immersive environments tend to be driven by very high-end simulation. One of the demonstrations we showed at SC09 was cloth simulation. In order to drive new waves of the fashion industry – both manufacturing those products and selling them online – you need realistic-looking garments that you can try on virtually and see how they fit, and that addresses one of the big needs of online retail sales.

The Exascale Report: I do remember that demo and it was quite impressive. So, am I being naïve or underplaying the value of this if I think of this as an aspect of visualization for the supercomputing community?

Bowman: No – and it's a very appropriate way to think of it. The 3D web tends to be the visual output of many of these simulations, some of which are targeted, as we said, at these kinds of commercial or consumer applications. But more generally than that, there's a phrase we use in our work here – collaborative visualization. Typically what we see in a high performance computing scenario is a very high-end offline simulation, some detail reduction on the output, and then some rendering of that into an image. So you might get some really neat movies of galaxies colliding or of fluid behavior. These 3D web applications have different requirements in that they're real time and interactive. What we need is not to generate highly rich visuals at the end of the simulation, but to interact with each other and with the simulation itself while it is running.
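To make that contrast concrete, here is a minimal sketch using a toy falling-particle simulation. The pipeline shapes are the point; every name is illustrative, not Intel's code:

    import time

    def simulate_step(state, dt):
        # Advance a toy simulation: a particle falling under gravity.
        x, v = state
        return (x + v * dt, v - 9.8 * dt)

    def offline_pipeline(steps=1000, dt=0.01):
        # Classic HPC flow: run the whole simulation first, render afterwards.
        state, frames = (100.0, 0.0), []
        for _ in range(steps):
            state = simulate_step(state, dt)
            frames.append(state[0])   # raw output saved for post-processing
        return frames                 # detail reduction + rendering happen later

    def interactive_pipeline(dt=0.01, wall_seconds=0.1):
        # 3D web flow: accept input and render inside the simulation loop.
        state = (100.0, 0.0)
        deadline = time.monotonic() + wall_seconds
        while time.monotonic() < deadline:
            nudge = 0.0               # would come from a mouse, phone, etc.
            state = simulate_step((state[0], state[1] + nudge), dt)
            # render(state) would draw this frame for all viewers immediately
        return state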

The Exascale Report: So is there a gap between the way the 3D visualization is being done today and what you are trying to do with your research program?

Bowman: Yeah, the gap is really the difference between offline and interactive real time. Most of what we do today – and this is a generalization – is offline post-processing of the data that comes out of the simulation. Much of what we're trying to drive, partly because of the consumer focus but also because the simulations themselves are valuable, is real time and interactive. And it's multiple-perspective: it's people interacting with people within the context of the simulation of the data or the computation that's taking place.

The Exascale Report: So I think when most people think of this, they think of social environments – they think of gaming or entertainment – but let's talk about some serious applications here. What are some of the more serious uses of this? I don't mean to say "serious" as if games aren't serious – they certainly are for a lot of people, and it's a huge industry – but in terms of other types of applications.

Bowman: Well, before you leave the gaming space, the games themselves can be very serious. For example, one of our collaborations, and we discussed this in the SC09 keynote, is a game we call Water Wars. It's essentially based on the Sandia National Labs hydrology simulator – we built a game on the front end of it in order to make the data being produced by the simulation more real and intuitive to the people interacting with it. We presented that game to a bunch of high school students in New Mexico and allowed them to play different roles – developers and farmers and ecologists and environmentalists – interacting with real water management issues driven by that simulation. So even the games themselves can be serious. At the other end of the spectrum, we have some work we're doing with Utah State on the use of these engines for real research – configuring and interacting with an evolutionary simulator. The researcher configures the terrain, configures the environmental conditions, describes the soil type in one of these environments, starts the simulation on the back end, and is able to see in real time the evolution of the population. In his case, he is using ferns*, but it's a fairly general simulation engine on the back end that could be used for studying, for example, the environmental impact on the evolution of populations of any type of organism.

* “Fernland” is a simulated population of more than 100,000 ferns in an environment with customizable terrain, soil, weather, seasons and physics.
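A hypothetical sketch of that researcher workflow – configure the environment up front, start the back-end engine, and watch the population evolve generation by generation. None of these names reflect the actual Fernland API:

    import random

    def run_evolution(config, generations=5):
        # Toy stand-in for the back-end engine: survival varies with moisture.
        population = config["initial_population"]
        for gen in range(generations):
            survival = 0.8 + 0.2 * config["soil_moisture"]  # wetter soil helps
            population = int(population * survival * random.uniform(0.95, 1.25))
            yield gen, population  # streamed per generation, so a 3D client
                                   # can render the population in real time

    config = {
        "terrain": "river_valley",
        "soil_moisture": 0.6,        # 0 = arid, 1 = saturated
        "season_length_days": 90,
        "initial_population": 100000,
    }

    for generation, population in run_evolution(config):
        print("generation %d: %d ferns" % (generation, population))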

The Exascale Report: Very interesting. As you were speaking, I was thinking of the old movie WarGames. But I would assume that today, gaming for the military is huge.

Bowman: Absolutely. And one of our primary collaborators in this is the Army Research Lab. There's a project called MOSES that's primarily there to provide that interactive environment. And in these environments there are really multiple objectives. You have the sort of first-person games that we all play on the Xbox. But at the other end, one of their objectives is open-ended simulations – where you're not learning a specific environment so much as you're learning creative thinking and intuition. It's the ability to understand the culture of a situation, and the environment in that situation, in order to learn how to interact and how to react.
So these military applications, especially with these 3D web applications, tend to be focused on critical thinking and intuition more than on the specific details of how to fire an M16.

The Exascale Report: How about the finance industry, or economics – do you see some applications for gaming there?

Bowman: There are two areas where we see the finance industry as potentially related to this. One is that these 3D web environments provide a very rich platform for creating visualizations. There's a basic set of programmable elements that allows us to create very rich, three-dimensional visualizations. For example, we have one simulation engine that we use for identifying clusters of documents – things like news stories. You can move through that space and actually manage very large collections of documents, interactively, with groups of people.

At the other end – and this goes back again to serious games like Water Wars – it's relatively easy to embed an economics simulation in the back end of these virtual environments. You're just adding one more simulation – one more parameter to the environment with which you are interacting. So you can create, in these virtual environments, economies that go with them. Now, I will say that those economies are a little farther away from high performance computing, because they tend to be real transactions as opposed to very large-scale simulations of transactions.

The Exascale Report: Do you see an application maybe in terms of forensically studying economic conditions around the globe?

Bowman: There are many potential applications like that. Let me try to generalize a little about where we're going with this. Whether it's finance or evolutionary biology, or climate management or climate understanding, all of these are basic problems where there is a large computation happening someplace, and results are being created that we need to understand. And it's more than understanding the raw numbers – it's creating the intuition behind those numbers. There are very good ways of presenting the output to get that basic understanding of the data. But to get the intuition, seeing relationships between things – and seeing those relationships interactively – is critical.

And I will say again, in this 3D web domain, the other critical aspect is that it's social. Many existing visualizations are really targeted at a single individual: I'm going to create a video stream output, one person is going to look at it, and they may send some email to their collaborators later. These 3D web applications tend to be "we're all in the data together," with each of us potentially looking at it from a different angle and a different perspective, and interacting with that data in real time. And as a result of what we see, we may change the simulation itself – change its parameters – to better fit our understanding of what's going on with the data.

The Exascale Report: That makes a lot of sense. So is there a huge role in this in the training and education communities as we move forward?

Bowman: By far, the most successful set of applications to date is in that training and education space. And it's a very, very broad space. It ranges from, as we mentioned, high school students in New Mexico trying to get a better feel for how serious the water management problems in their environment are, to, at the other end of the spectrum, people who are very much into historical reenactment. We've had a number of discussions with that community, and we have a project that we tinker around with that's a recreation of the Gettysburg National Battlefield.

What we're trying to do in that particular example is create a game where people can create the reenactment themselves. So it's things like basic tools for capturing the movement of the real people who do those reenactments on the battlefield every year, and letting people take their pictures and identify the placement of different troops or weapons in the environment. It's being able to experiment with and understand the movement of the troops, and then run "what if" scenarios on that. What if one of the generals had not advanced but in fact had stayed back a little longer and maintained his defensive position – what would have happened? How would the battle have changed? These environments provide a really rich way for a single individual to go through and understand what Pickett was thinking when he stood there looking up at Cemetery Ridge on July 3rd, 1863. But they also allow hundreds of people to go back and reenact it in a fairly realistic environment with a realistic simulation behind it. And so these 3D training and education examples tend to be focused not on rote learning so much as on intuition and understanding situations. Like I said – when you go to Gettysburg and stand where Pickett started his charge and look up the hill, you realize the desperation of the South and their need to succeed, and you understand it in a way that you don't by just reading a book.

The Exascale Report: Now you refer to this as ‘virtual worlds infrastructure’ – did I get that right? Can you give me a label for all of this?

Bowman: Well, the label we use for it is the 3D web. Our particular group inside Intel Labs is working on infrastructure for virtual worlds, or virtual environments. Our goal is to produce Open Source implementations so that the community at large can experiment with applications in this space. We focus on technologies for scalability: we want to make it possible for people to apply the right amount of computing and communication resources to the problem so they can create the environment they want.

The Exascale Report: What kind of resources are you throwing at this now?

Bowman: The 3D web is one of the Intel Labs research imperatives. It's funded at the highest level. We have several groups working on different aspects of it – everything from exploration of hardware requirements to how we would deliver these experiences across the whole spectrum of devices, from hand-held phones all the way up to very rich immersive clients. We have other groups working on the more consumer-oriented applications. One group is working on a project called "magic mirror," which is an attempt to help the fashion industry sell clothing more easily: you can try on the clothes virtually and see how they fit, which gives you a better feel for what you are buying in an online experience and also decreases the number of times you have to send something back because it just isn't the right one. It's nice to go into a store where you can put something on and get a very realistic understanding of how it fits and what it's going to look like on you – but that's hard to reproduce in an online environment, so that's one of the applications we're working on.

The Exascale Report: So, you’ve been working on this for how long?

Bowman: Basically since that 2008-2009 timeframe, we've been doing a variety of things to set the stage – establishing and understanding the common technologies and working with some of the Open Source projects. Intel has had a long history of working in the visual computing space as well. Recently we funded two academic centers to support additional research in the area – the Intel Visual Computing Institute in Saarbrücken, Germany, and the Intel Science and Technology Center for Visual Computing hosted at Stanford.

The Exascale Report: Let's talk about the computing resources required. I know it's not a product yet – you are still on a research agenda – but what would you anticipate when this becomes practical and organizations start to adopt this type of technology? Is this a supercomputing application?

Bowman: The short answer is no. Our goal has been to let you apply the right amount of resources: if you have a high-end simulation with very high expectations for immersiveness and realism, that's going to require a large amount of compute resources. If your expectation is that you want to focus on social interaction, and detail in the immersion is not particularly important, then that requires far fewer resources. This tends to be an application that's partitioned between cloud services that manage the simulation and some capabilities on the client. One of our objectives is to allow the entire spectrum of clients to connect to and have some experience in these applications – and the experience you would expect to have on a phone is not the same as what you would expect on a high-end desktop machine.
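A hypothetical sketch of that partitioning decision – the cloud runs the shared simulation, and each client negotiates a fidelity tier that matches its capabilities. The thresholds and field names are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class ClientProfile:
        device: str            # "phone", "laptop", "workstation"
        gpu_gflops: float      # rough rendering capability
        bandwidth_mbps: float

    def choose_experience(client):
        # Map client capability to a fidelity tier served by the cloud.
        if client.gpu_gflops < 50 or client.bandwidth_mbps < 5:
            # Low-end device: the cloud renders and streams video frames.
            return {"render": "cloud", "mesh_detail": "low", "update_hz": 10}
        if client.gpu_gflops < 500:
            return {"render": "client", "mesh_detail": "medium", "update_hz": 30}
        # High-end immersive client: full meshes, high-rate state updates.
        return {"render": "client", "mesh_detail": "high", "update_hz": 60}

    print(choose_experience(ClientProfile("phone", 20.0, 2.0)))
    print(choose_experience(ClientProfile("workstation", 2000.0, 100.0)))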

The Exascale Report: So what do you see Mic as some of the barriers to achieving your goal? What are some of the things that might hold you back in this research area?

Bowman: At this point, the biggest barrier is really the way people interact with these applications. Our entire computing paradigm is two-dimensional: we have flat screens and keyboards, and even when we're playing a 3D game, we're using a mouse and arrow keys to interact with it. We've sort of figured out how to do that, but it's not nearly as intuitive as, for example, the gesture interfaces we now have for tablets, where we seem to have figured out the right set of interactions for those devices. With the emergence of things like depth cameras and a better understanding of non-touch interfaces, there's real hope that we can start building interfaces that are more naturally 3D. One of our projects in the lab allows you to connect your phone, as a very rich sensor platform, to an application running on a high-end client. The phone essentially acts as a 3D mouse: I can swipe it in the air, I can jiggle it, I can move it forward and backward, and it becomes a navigation device that I use for the interaction. Even something as simple as that gives me a much richer set of interactions in this space.
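A minimal sketch of that "phone as 3D mouse" idea: motion readings streamed from the phone get integrated into a camera position and heading on the client. The sensor feed is faked here; a real system would receive accelerometer and gyro packets over the network:

    import math, random

    def fake_sensor_stream(n):
        # Stand-in for packets from the phone: (forward, sideways, yaw_rate).
        for _ in range(n):
            yield (random.uniform(-1, 1), random.uniform(-1, 1),
                   random.uniform(-0.1, 0.1))

    def navigate(stream, dt=0.05):
        # Integrate phone motion into a camera position and heading.
        x = y = heading = 0.0
        for forward, sideways, yaw_rate in stream:
            heading += yaw_rate * dt   # twisting the phone turns the camera
            # pushes and swipes translate in the camera's local frame
            x += (forward * math.cos(heading) - sideways * math.sin(heading)) * dt
            y += (forward * math.sin(heading) + sideways * math.cos(heading)) * dt
        return x, y, heading

    print(navigate(fake_sensor_stream(100)))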

The Exascale Report: So can you give us a summary of some of the milestones you are most proud of – of what you have accomplished with this?

Bowman: Well, I can say that, in the community in general, there really are some viable Open Source platforms emerging. We work with OpenSimulator as the platform for our work, but there are others as well that are very good at reducing the entry cost and allowing people to start experimenting. We have a number of small universities that we work with that are now experimenting in this space, and the barrier to entry is low enough that they can just try things without too much investment. So the success of that Open Source platform is one. With regard to our research in particular, we at Intel have been able to demonstrate order-of-magnitude improvements in the immersiveness of the environments and the interaction of the people. We now have demonstrations that we run with literally a thousand people interacting. That opens the door for things like the virtual Gettysburg, where we can do full-scale reenactments in those environments, and for some of the Army Research Lab training applications, where they want to do fairly large training exercises at company level, with hundred-plus-person interactions.

The Exascale Report: Can you give us a look into what we might expect over the next year or so in terms of new milestones?

Bowman: There are two big things I think we need to address at some level. One is that 3D interaction – how we do the interaction. We have so much new technology that allows us to redefine interfaces that I think you are going to see the emergence of much better interfaces for interacting, whether it's in the game space, where developers are highly motivated to get good interactions, or in some of these other more social and virtual environments. The other one I think you'll see is, again, targeting the broader spectrum of client devices and deciding where you put the different components of the game. Can we do cloud-based gaming for low-end devices, where we don't have the graphics capability or the power to do it locally, and much richer environments on the top end?

The Exascale Report: So, a question I can't seem to answer in my own mind: how would you define the difference between gaming and simulation?

Bowman: It used to be easy to understand the difference between them, but not so much anymore. We've always had AI characters, and we've always had physical simulation and other things in games so that, for example, when you shoot an arrow, the arrow behaves in the right way. We've always had some level of simulation in those games. But the more realistic we can get in the behaviors of the characters and the environment, the more the game catches you. I think what we're seeing is different kinds of simulation being brought to bear. An example is Water Wars, where we're taking – call it a very scientific simulation for hydrology management: how much water is going to be flowing through the river at this point five years from now, given these environmental conditions – and making it the core of a game where we can interact and make decisions – policy decisions – and use that simulation to actually see the long-term effect of the decisions we're making. The transition that's happening is not that simulations are appearing in games where they didn't exist before; it's that it becomes easy for us to intuit the results of the simulation when we add it to some kind of gaming environment where we're interacting more richly with the data – and with the people.
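A toy sketch of that pattern – assumed, and nothing like the actual Sandia hydrology model: a simulation sits behind a game loop, and a player's policy decision becomes a parameter whose long-term effect the simulation reveals:

    def river_flow(years, rainfall_per_year, irrigation_draw):
        # Very crude water-balance model: flow = inflow minus policy draw.
        flow = 100.0                   # arbitrary starting flow units
        for _ in range(years):
            flow = max(0.0, flow + rainfall_per_year - irrigation_draw)
            yield flow

    # Two player "policy decisions" compared over five simulated years:
    for draw in (20.0, 45.0):          # conservative vs. aggressive irrigation
        final = list(river_flow(5, rainfall_per_year=30.0, irrigation_draw=draw))[-1]
        print("draw %.0f: flow after 5 years = %.0f" % (draw, final))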

The Exascale Report: The entire global HPC community has drawn such a bead on exascale – it comes up in every conversation. Does exascale make it much more difficult for something like this to come to fruition, or does this become a stepping stone that helps us understand how to scale applications to exascale? How do you put this into the exascale conversation?

Bowman: To put it into that conversation – the simulations themselves are going to have their own development trajectories. Where we see the intersection of the 3D web with exascale is really in things like the cloth simulation we talked about today. That's really hard. The more compute power we can apply to it in order to get more realistic behavior – just being able to differentiate the weight of the cotton that goes into a fabric vs. silk vs. nylon, and simulating those accurately – the better a feel we can get for what a garment is going to look like when we're really wearing it. So even things like that have realism requirements that are going to drive a significant amount of computation. It's really at those intersections – as we get more and more compute power and are able to do more and more realistic simulations – that entirely new opportunities for consumer applications open up.

For more information on Intel Labs, please see: http://www.intel.com/content/www/us/en/research/intel-research.html
