Rock Stars of HPC: James Phillips


This Rock Stars of HPC series is about the men and women who are changing the way the HPC community develops, deploys, and operates supercomputers, and about the social and economic impact of their discoveries.

James Phillips, Senior Research Programmer at the University of Illinois.

Recipient of a Gordon Bell Award in 2002, James Phillips has been a full-time research programmer for almost 20 years. Since 1998, he has been the lead developer of NAMD, a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems that scales beyond 200,000 cores. He is undoubtedly a Rock Star of HPC.

insideHPC: What first led you to HPC?

James Phillips: It all began in the summer of ‘91, during an internship in a Research Experiences for Undergraduates program at the Minnesota Supercomputer Institute. At the time I was able to do some work on the Cray machines there, but really the excitement came from exposure to that whole environment and the concept that if you want to do computational science, there are things you can do on supercomputers that you just can’t do any other way.

It opened up a view of the larger world of what high-end computing was, and how as an individual you could really accomplish something and make a difference in a relatively short period of time. That was appealing, and computational science has allowed me to bring my interests in physics, math and computing together.

Of course, a turning point came in 1993 when I went to graduate school at the University of Illinois at Urbana-Champaign and was looking for an advisor. The advice I was given was that there weren’t many jobs in condensed matter physics, the field I was interested in, so I should do something that combined computers and biology.

insideHPC: That advice led you to Professor Klaus Schulten – what impact did he have on your career?

Phillips: When Schulten came here from Germany he had a couple of grad students and a home-built parallel computer that they used to run the first simulation of a membrane back in the early 1990s. He was deeply committed to the idea that we can use parallel computing to do science, and he largely dedicated his life to running this place. In fact, he rarely slept – I remember at one point I was working on something and I sent him an email at 3am to try and impress him, and he wrote back.

His dream was to simulate a cell, and this is arguably crazy because a cell is too big, and the larger the system, the slower it runs. On the face of it, it makes no sense why anyone would want to do this. But what Schulten proved over his entire career is that whenever you look at something at a larger scale and you really put the thing together in full atomic detail, you learn something just from that exercise. I’ve remembered that lesson.

insideHPC: What a wonderful beginning. What would you say has been your biggest accomplishment?

Phillips: Hopefully, it’s in the future, but if I had to pick one, it would be when NAMD was listed as a target application for the NCSA supercomputer, Blue Waters. When we got access to the supercomputer for the early science runs on the HIV virus capsid, we found the hardware, the software, and the science on the experimental side all coming together at the same time to make very fast progress on something that wouldn’t have been possible on any other machine.

The simulations done on the first early science machine were of a tubular version of the HIV capsid, and we were submitting that work to Nature. In the meantime, we actually got the full capsid and were running that on the Blue Waters machine. This was at a point when the machine was just coming online and needed to make a splash to justify why it was important and why we had spent all this money on it. Our work meant we were able to offer a hard, highly impactful science case to make those justifications. Bringing all of those aspects together in one project was very rewarding and was definitely a high point in terms of impact.

insideHPC: What do you think has made the greatest impact in HPC?

Phillips: I’ve given this question a lot of thought, and I have to say Linux. Back at the University of Illinois at Urbana-Champaign, Schulten’s students had figured out that they could buy and own their own machine for a tenth of the cost that NCSA was going to spend, and still get equivalent or higher performance. The first cluster the group put together consisted of HP 735 workstations with a 100 Mbit ATM network. The hardware was tied to the operating system: we had HPs running HP-UX and SGIs running IRIX, and NCSA put a Windows cluster together.

Because of Linux, a whole bunch of redundant software and software issues coalesced into this one operating system running across multiple platforms, and as soon as you do that through open source, solving the software problems is no longer a competition between vendors who are trying to sell hardware; it becomes a community effort. Linux made the HPC software ecosystem universal, and more or less the same as what you can have on a workstation on your desk. And without Linux we wouldn’t have cloud computing, with its massive parallelism and virtual machines. HPC would be a different and much more painful place to work today.

insideHPC: Next month, you’ll be delivering a talk on Petascale Molecular Dynamics Simulations from Titan to Summit at the Nvidia GTC conference. What else do you think will get people’s attention?

Phillips: Hopefully it’s something that I don’t know anything about yet, but I would like details on Volta, as Oak Ridge National Laboratory’s new Summit supercomputer has been designed to launch with Volta GPUs. Right now they have a smaller machine using Pascal GPUs, and while those are very fast, one thing I learned from the Blue Waters development process is that you can anticipate, simulate, and plan all you want, but you get more done in one week on the actual machine than you do in six months of preparation.

Ahead of time you’re just guessing what you think the problems are going to be, but when you get on the machine you know what you have to address. So the sooner we get information about Volta, and ideally access to the first Volta GPUs, the sooner we can really begin preparing for the large installation. That’s what I’m hoping to hear more about, and I can’t think of anything more important than that.

James Phillips is Senior Research Programmer at the University of Illinois.

At GTC 2017, Phillips will present a talk entitled ‘Petascale Molecular Dynamics Simulations from Titan to Summit’. He will discuss the opportunities and pitfalls of taking GPU computing to the petascale, along with recent NAMD performance advances and early results from the Summit Power8+/P100 “Minsky” development cluster.