Rock Stars of HPC: Ricky Kendall


This series is about the men and women who are changing the way the HPC community develops, deploys, and operates the supercomputers we build on behalf of scientists and engineers around the world. Ricky Kendall, this month’s HPC Rock Star, is at the center of enabling science on the largest computing systems the world has ever seen.

Kendall is the leader of the scientific computing group at one of the nation’s leading HPC facilities, the National Center for Computational Sciences at Oak Ridge National Laboratory, where he and his team help users get the most out of what is today the largest supercomputer in the world. But this isn’t a theoretical task for Kendall — he comes from the large-scale application development trenches himself, having been part of the team that started NWChem, one of the leading community codes for computational chemistry. Kendall’s accomplishments put him at the center of the computational community, in a role we used to call a computational engineer when I was in graduate school. As he puts it, “The chemistry community often sees me as a computer jock, and the computer science community sees me as an applications person.”

Kendall is the kind of leader that the HPC community needs most: someone committed to making sure that the systems our community builds end up helping to move the world forward.

Ricky Kendall started his career as a staff scientist at Pacific Northwest National Laboratory where he was responsible for the development of computational chemistry in support of the waste remediation activities of the Environmental Molecular Sciences Laboratory (EMSL). Part of this work included development that eventually became the community chemistry code NWChem, an application that is in wide use today for a variety of problems of interest to the science and engineering communities.

But Kendall wasn’t solely focused on computational code development. During his time at PNNL he continued to develop his desire to help prepare the next generation of computational professionals by serving as an adjunct lecturer at Washington State University and working with high school students. Kendall says that the challenges were fun and rewarding, both for him and the students. “You learn a great deal when you have to explain things so that students can understand the topic,” he says. “You also learn what you thought was true may not be quite right.”

After leaving PNNL, Kendall headed to Ames Laboratory in Iowa where he served as a computational scientist. He took the teaching bug with him when he moved, and added an adjunct associate professorship at Iowa State University to his regular duties at the lab. In addition to developing his own understanding of the field, Kendall says that he also had the sense that he was filling a real need in our community. “At WSU and Iowa State University, the courses I mostly taught involved programming. I found that programming skills are not stressed by the CS curriculum at many schools, and felt I wanted to help students get those practical skills.” He also contributed to the strength of the HPC community directly by developing an HPC course at ISU. The course was geared toward learning different parallel programming models, which he says the students found challenging and useful, and which ultimately included students from aerospace engineering, chemistry, physics, and other departments across the campus.

As he was pursuing his “regular” job and keeping up with his teaching duties, Kendall also found time to publish, and the list of his publications is impressive not only for sheer quantity, but for the diversity of topics, which range from low-level performance measurement to application and algorithm development. Kendall credits this unusual diversity to values instilled by his graduate advisor. “My advisor felt that students should have skills in both applications and theory and code development,” he explains, “and I found that I really liked doing the code development in addition to the application work. I find it rewarding being able to use a code I helped develop on the applications I’m interested in, knowing that the development was driven by the needs of the application space.”

It takes a village

Today at Oak Ridge, Kendall serves as the group leader for the Scientific Computing Group, a role that he describes as “definitely on the enablement side” of the computational spectrum. “My team’s focus is to help our users get the most out of the resources we have and plan to have at the facility. I have an amazingly talented team that does this job, and we have been reasonably successful in integrating with our user community and getting codes to scale to the size of our Jaguar system.”

Kendall’s experience with education, mentoring, and large-scale application development makes him uniquely suited to helping ORNL’s computational communities make effective use of systems like Jaguar, currently ranked #1 on the Top500 list of the world’s largest supercomputers. “For most of my career,” he says, “I have sat on the fence that separates applications guys and developers. The chemistry community often sees me as a computer jock, and the computer science community sees me as an applications person.”

But this perspective is extremely useful, Kendall explains, because leadership-scale science is a multidisciplinary effort. “Many of the most successful applications on leadership computing facilities today have multidisciplinary teams. These teams have someone who understands the theory being used, the mathematics, the algorithms, computational science at scale, programming skills, and core computer science skills. All are needed to make the application work on the leadership systems and be a potential candidate for future systems. The successful applications plan for change and have ways to deal with how hardware evolves.”

Two handshakes of separation

Multidisciplinary teams of this kind are really communities, and even a quick glance at Kendall’s resume reveals a commitment to the HPC community that goes beyond teaching and education. “The best advice I got when starting down the development path,” he continues, “was to steal what you can and only write the parts you have to. I think that still holds. The trick is to make yourself aware of what others are doing and how you might leverage it.”

For Kendall, a key part of being aware of what others are doing is involvement in community events like the SC conference series, for which he is serving as the Technical Program Chair for SC10. This is a huge job, and it represents a significant commitment of time and energy above and beyond one’s day job and the rest of one’s life. I asked what drives him to put so much energy into what is, essentially, an optional activity. “There are many reasons to be involved in community efforts,” Kendall explains. “One is to help spread the word about the things you are doing as a scientist and as part of an organization. Another is the networking aspect of such involvement: you are no more than two handshakes away from anyone in the HPC community, and it’s important to make those connections for yourself, your students, and your organization. In terms of building an organization and keeping it healthy, recruiting staff is an incredibly time-consuming and interactive task. By being involved in efforts such as the SC conference, you get a good feel for the overall community and help your recruiting efforts. You also learn what others are doing and can potentially leverage other activities in the community with your own scientific missions and goals. These kinds of grassroots connections can lead to collaborative efforts and new areas of research.”
As an educator, community leader, and technologist, Kendall has already helped move the HPC community through many transition points. What does he see as our next significant challenges? “Software is one of the biggest challenges we face,” he explains. “Exascale software is likely not going to look the way applications look today. We are at a turning point, and where we go next is an open question. In general, though, to get to the exaflops scale we are going to have to focus more on programming in the node. The path forward here is getting more powerful nodes, and lots of them. This means that as a community we will have to deal with multiple levels of concurrency and make that all work. We will have to realistically bring together some of the old vector techniques, invent new many-core techniques, and utilize the scale of the nodes all at the same time. There is no free lunch here, and there needs to be a lot of diversity available to the community to try different techniques and algorithms.”

Getting this kind of diversity into the efforts we pursue on the way to exascale is going to mean adding room in the process for failure, with many incremental steps and missteps on the way to the final destination. “I often describe scaling codes to large core counts as playing ‘whack-a-mole,’ because you find and eliminate a bottleneck to scaling and something else pops its head up. The path to exascale is going to be a multidimensional whack-a-mole with really ugly moles! It’s going to be a lot of work, but there will likely be some fun rolled in along the way as well.”

As long as I’m useful

Kendall describes his role today as a “glue person,” helping to join applications and computer scientists on teams that do some of the most advanced computational simulations in the world. This is a role that Kendall relishes, incorporating staff development and mentoring along with a deep understanding of technology and applications domains. “I decided to take the job at ORNL to help build the leadership computing facility, and my team, along with the rest of the division and our sponsors, has been able to deliver on that front. We have the #1 system on the Top500 list, and we were able to work with our users, who got three applications doing science at above 1 petaflops of performance. I enjoy the enabler role and will continue in that vein as long as I’m useful.”