

NVIDIA Simplifies Building Containers for HPC Applications

In this video, CJ Newburn from NVIDIA describes how developers can quickly containerize their applications and how users can benefit from running their workloads with containers from the NVIDIA GPU Cloud.

“A container essentially creates a self-contained environment. Your application lives in that container along with everything the application depends on, so the whole bundle is self-contained.”

That solves a couple of problems, says CJ:

“One of the key problems that deals with is that if you had an application and you sent it to me, I could spend a long time trying to figure out everything I need to actually make it run, with all those dependencies. I might not have them, so I might need to go get them one by one. And the assumptions you made might not jibe with my understanding of them. Even if I’m an expert, I might spend a couple of days trying to do that. And in the end, I might still not run the application in the way that you intended. Instead, you can essentially deliver to me a container that basically runs itself, that has all of the dependencies, and runs in the environment you intended it to run in.”

Transcript:

insideHPC: Hi, I’m Rich with insideHPC. We’re here at the GPU Technology Conference in Silicon Valley, and I’m here with CJ from NVIDIA. CJ, we’ve been learning a lot about containers recently, but I think we should start at the beginning. Why would I want to use containers for my workload?

CJ Newburn: Sure. A container essentially creates a self-contained environment. Your application lives in that container along with everything the application depends on, and that whole self-contained bundle can be delivered to a platform and run there.

That solves a couple of problems. One of the key problems it deals with is that if you had an application and you sent it to me, I could spend a long time trying to figure out everything I need to actually make it run, with all those dependencies. I might not have them, so I might need to go get them one by one. And the assumptions you made might not jibe with my understanding of them. I might have to fight through lots of documentation, and even if I’m an expert, I might spend a couple of days trying to do that. And in the end, I might still not run the application in the way that you intended.

Instead, you can essentially deliver to me a container that basically runs itself, that has all of the dependencies, and runs in the environment you intended it to run in. And if, for example, you’ve given it to me, I use it in a way you didn’t expect, and you now, unfortunately, need to debug it, you actually have control over how that environment was set up, as opposed to scratching your head and saying, “Now, what did you do with that beautiful application, and how did you abuse it?”

You essentially have a great deal of control over delivering the value. And because it became so much easier for me to take, build, and deploy your application, the time it takes me to do that is so much lower that I’m much more likely to take your latest release. So all that effort you put into making it beautiful, making it optimized, and making the best use of the platform (for example, using our latest GPU features) now becomes available to me, and I’m much more likely to use it.

insideHPC: Do you have any tools available for developers to containerize their HPC applications?

CJ Newburn: Yes, we do. NVIDIA is now offering a script as part of an open source project called HPC Container Maker, or HPCCM (https://github.com/NVIDIA/hpc-container-maker), that makes it easy for developers to select the ingredients they want to go into a container and to provide those ingredients in an optimized way using best-known recipes. HPCCM takes as input an HPCCM recipe file, a Python program that makes calls to insert primitives and building blocks, and leverages a set of recipes to produce Dockerfiles or Singularity recipe files that can be used to build containers optimized for performance, size, and reuse. A variety of OEMs, data centers, and ingredient providers have found it simple, extensible, forward-looking, and very productive to use. In the end, HPCCM makes it easier for developers to create optimized containers, and it relieves data center admins of the complexity of installing complicated applications. Finally, since making the best use of the entire software stack is a key part of overall value, HPCCM helps deliver the latest, best-optimized versions of applications to end users.
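To give a rough sense of what such a recipe file looks like, here is a minimal sketch. The specific building blocks, base image, and versions below are illustrative assumptions, not details from the interview; `Stage0` is a variable provided by the hpccm tool when it processes the recipe, so this file is not run directly as a standalone script.

```python
# recipe.py -- a sketch of an HPCCM recipe file. It is ordinary Python,
# but `Stage0` is injected by the hpccm tool; each `+=` appends a
# primitive or building block to the container specification.
# (The base image and component versions here are illustrative choices.)

# Start from a CUDA development base image.
Stage0 += baseimage(image='nvidia/cuda:9.0-devel-ubuntu16.04')

# Add the GNU compiler toolchain (gcc, g++, gfortran).
Stage0 += gnu()

# Build OpenMPI from source, with CUDA-aware support enabled.
Stage0 += openmpi(version='3.0.0', cuda=True)
```

The same recipe can then be rendered for either container runtime, for example `hpccm --recipe recipe.py --format docker > Dockerfile` or `hpccm --recipe recipe.py --format singularity > Singularity.def`, and the output built with the usual `docker build` or `singularity build` commands.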

insideHPC: Well, thanks for sharing that with us today. Sounds like containers are the way of the future.
