Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud


In this video from the GPU Technology Conference, John Stone from the University of Illinois describes how container technology in the NVIDIA GPU Cloud helps the University distribute accelerated applications for science and engineering.

“Containers are a way of packaging up an application and all of its dependencies so that you can install them collectively on a cloud instance, a workstation, or a compute node, without the system administration skill and involvement it would otherwise take. Inside the container image, much as in a virtual machine, the user can change anything they want: customize the layout of the file system and do all kinds of other things that would otherwise require a lot of permission and cooperation, particularly in large computing installations.”

John Stone is the lead developer of VMD, a high-performance molecular visualization tool used by researchers all over the world. His research interests include molecular visualization, GPU computing, parallel computing, ray tracing, haptics, virtual environments, and immersive visualization. Mr. Stone was inducted as an NVIDIA CUDA Fellow in 2010. In 2015 he joined the Khronos Group Advisory Panel for the Vulkan Graphics API, and in 2017 he was named an IBM Champion for Power for innovative thought leadership in the technical community. He also provides consulting services for projects involving computer graphics, GPU computing, and high performance computing.

Transcript:

insideHPC: Hi, I’m Rich with insideHPC. We’re at the GPU Technology Conference in Silicon Valley, and I’m here with John Stone from the University of Illinois. John, thanks for having me here. A question for you: I heard you deployed some software with containers on the NVIDIA GPU Cloud. First of all, what are these containers, and why would you want to do that?

John Stone: So containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance, a workstation, or a compute node. It doesn’t require the typical amount of system administration skill and involvement to put one of these containers on a machine. And within the container image, in a manner roughly similar to what you have in a virtual machine, the user can change anything they want. What it looks like on the inside is an entire operating system snapshot, so you can customize the layout of the file system and do all kinds of other things that would otherwise involve getting a lot of permission and cooperation, particularly in large computing installations.
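
For readers who want to see what this looks like in practice, here is a minimal sketch of pulling a container image and running a quick command inside it with the host GPUs visible, driven from Python through the Docker CLI. The image name nvcr.io/hpc/namd:example-tag is a hypothetical placeholder rather than an actual NGC tag, and the --gpus flag assumes Docker 19.03+ with the NVIDIA Container Toolkit installed; setups from the time of this demo used the nvidia-docker wrapper instead.

    import subprocess

    # Hypothetical NGC image name, used purely for illustration; the real
    # registry path and tag would come from the NGC container catalog.
    IMAGE = "nvcr.io/hpc/namd:example-tag"

    # Pull the container image from the registry onto the local machine.
    subprocess.run(["docker", "pull", IMAGE], check=True)

    # Run a quick command inside the container with all host GPUs visible,
    # just to confirm the GPUs show up from inside the container.
    subprocess.run(
        ["docker", "run", "--rm", "--gpus", "all", IMAGE, "nvidia-smi"],
        check=True,
    )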

insideHPC: Well, I understand you have a demo using these containers. Can you tell us about that?

John Stone: So what we’re showing here uses two of our programs. Our NIH center has actually contributed three different software packages to the NGC container registry. One is NAMD, which is a molecular dynamics simulation program; that’s what’s running in this text window here as it prints timing information and so on. Another is VMD, which is a molecular visualization and analysis tool that I’m showing here. And the third is Lattice Microbes, which simulates entire cells. So NAMD is doing a simulation of the satellite tobacco mosaic virus, which is shown here.

This is the NGC container image. It’s been pulled down to this workstation, an NVIDIA DGX Station, so it has four Volta GPUs. Basically, all that’s been done is to pull down a copy of NAMD and run the container image on this workstation; this is one of our standard NAMD benchmarks. And so you have what, to the user, looks like a little virtual machine in here. If you go and look at what’s going on inside, it looks like a separate little computer with its own file system image, which is structured for the benefit of NAMD. And then what’s going on here is that NAMD is communicating with VMD interactively in real time, and VMD is showing the atomic structure as it undergoes the simulation. So you can see the atoms wiggling around here. If we had a big enough computer, we’d actually be able to see the dynamics much more rapidly, but this is just a demo to show the technology.
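
As a rough illustration of the kind of launch behind this demo, the sketch below bind-mounts a benchmark directory from the host into the container and starts NAMD with all GPUs visible. The image tag, host path, input file, NAMD binary name, and thread count are all assumed placeholders; the actual NGC container documents its own launch commands.

    import subprocess

    # All names below are placeholders for illustration: the real NGC image
    # tag, host benchmark directory, NAMD binary name, thread count, and
    # input file would differ in practice.
    IMAGE = "nvcr.io/hpc/namd:example-tag"
    HOST_DIR = "/home/user/namd-benchmarks"   # placeholder host path
    INPUT = "stmv.namd"                       # placeholder benchmark input

    # Bind-mount the benchmark directory into the container, expose all of
    # the workstation's GPUs, and launch NAMD on the input file.
    subprocess.run(
        [
            "docker", "run", "--rm", "--gpus", "all",
            "-v", f"{HOST_DIR}:/workspace",
            "-w", "/workspace",
            IMAGE,
            "namd2", "+p8", INPUT,
        ],
        check=True,
    )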

insideHPC: Yeah. Well, a kind of wrap-up question: we’re in HPC, right? This is all about performance. Doesn’t this container technology slow things down?

John Stone: Not from what I’ve seen so far. Certainly, our applications have run fine. Part of the quality assurance process that we went through to get our applications into NGC involves running benchmarks to validate that the software is running at the expected level of performance and that it’s getting good utilization out of the GPUs. And they test it on a bunch of different types of machines. So we ran on various kinds of DGX systems, single-GPU machines, and so on. My original deployment test platform was actually an Amazon EC2 instance.
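
As a hedged example of what one such check might look like, the sketch below reads per-GPU utilization through nvidia-smi’s CSV query interface. The query flags are standard nvidia-smi options, but the utilization threshold is an arbitrary illustration and not part of the actual NGC qualification process, which compares benchmark results against expected per-platform performance.

    import subprocess

    def gpu_utilization():
        """Return the current utilization (percent) of each visible GPU,
        as reported by nvidia-smi's CSV query interface."""
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [int(line) for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        util = gpu_utilization()
        print("GPU utilization (%):", util)
        # Arbitrary illustrative threshold; a real QA pass would compare
        # benchmark timings against expected per-platform results instead.
        if util and min(util) < 50:
            print("Warning: at least one GPU appears under-utilized.")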

insideHPC: Well, John, it sounds like these containers are going to help people focus on their science rather than on how to get their applications up and running on the big machines. Thanks for sharing this with us.

John Stone: Thank you very much.
