insideHPC Vanguard: NNSA’s Si Hammond and the ‘Almost Impossible’

Si Hammond

In our continuing series on HPC-AI Vanguards, in which we recognize young members of the HPC-AI community showing potential to become industry leaders of tomorrow, we profile Si Hammond, Federal Program Manager at the National Nuclear Security Administration (NNSA).

Hammond’s first involvement with HPC and AI came in 2005, when he was a master of engineering student at the University of Warwick in the UK. He graduated in 2011 with a PhD in HPC and performance analysis and that same year joined the technical staff at Sandia National Laboratories, where he built expertise across numerous architectures and in performance analysis and optimization. He has been Federal Program Manager at NNSA since 2022. Hammond is highly regarded by colleagues throughout the HPC-AI community for his technical skills and natural style of mentoring and leadership.

An Interview with Si Hammond, Federal Program Manager, NNSA: Taking on the “Almost Impossible”

What is your passion related to your career path?

Increasing the performance of hardware and software so the community is collectively able to get more science done and deepen its insight. Some of the best parts of my career have been working with interdisciplinary teams to really dig in and develop solutions to problems that can seem, at the start, almost impossible.

Do you prefer working as an individual contributor or a team leader?

Sometimes a little of each. At times in my career, I can remember jumping out of bed early in the morning or working until really late at night trying to solve a particular performance problem or bug; I loved the challenge of understanding a problem deeply enough to solve it. At other times, I’ve enjoyed being part of amazing teams. The feeling of knowing you have someone to call or bring in to support you, whatever happens, can make no challenge feel too hard.

Share with us an event you’ve been involved with that brought about an advance, a new insight, an innovation, a step forward in computer science or scientific research.

I think one of the biggest events I was directly involved in was serving as application lead for the Astra supercomputer deployment at Sandia (the world’s first petascale Arm system). I led the benchmarking and application porting team for that platform. It took real work to get the initial ports done across applications, system libraries, etc., and then to validate all the performance we had projected.

Who or what has influenced you the most to help you advance your career path in this community?

I have been fortunate to have been surrounded by incredibly supportive people since I went to university. There are too many to name, but what I learned from many of them was to focus on the real problems that users had. Over time, I’ve become a fan of developing initial solutions and then iterating to improve them, and of not worrying at the start of a project that we don’t already know how we will get to the end – the journey itself is part of the learning experience.

What are your thoughts on how we, as a nation, can build a stronger and deeper pipeline of talented and passionate HPC and AI professionals?

In my new role as a program manager, I like to talk about two things – mission and meaning. “Mission” is the bigger picture: the “what” we are trying to do and the “why” someone cares about our eventual outputs. I’ve constantly been surprised at how many people want to work in specific mission areas.

At NNSA, we have a mission to support national security, and many of the folks I work with really identify with what we do. As a nation, we need to do a much better job of explaining to people what we do and how it affects daily life. HPC really does impact the “average” citizen in many ways, but I’m not sure many people understand that. “Meaning”, for me, is identifying how everyone on our team fits into that picture and contributes to something more than the slice of the problem they work on. I draw on my time in the US Army with the phrase – “contributing to something bigger than any one person.”

If we get the environment right, I think it’s possible to bring together extremely diverse sets of opinions and approaches and solve genuinely difficult problems. HPC and AI professionals have so many options to help with amazing missions. Whether that’s fundamental science, solving physics or engineering problems, supporting national security or creating the next generation of medical advances, there is so much our community contributes to. In general, I think people are attracted to HPC and AI for the mission, but they stay in the community because the work and the career are meaningful – that’s something we all need to help build and support.

What does it take to be an effective leader in HPC and AI?

This is a great question – in part because I think I’m still trying to learn the answer myself. For me, some of the more inspirational leaders I have worked for have had a vision for where we wanted to go that wasn’t necessarily understood down to the finest detail. That enables teams of people to get behind the vision, find the solutions, contribute, and work out how they fit in. Good leaders will listen to lots of inputs, but in the end, I’ve also appreciated leaders who have made crisp decisions (right or wrong) and worked with their teams to solve challenges along the way.

Like leadership in any discipline, accepting that there will be mistakes and that some outcomes won’t pan out is also part of building great teams. Within the NNSA, one of the recommendations we recently received from our National Academy review was to allow our research teams to take more risks and be more aggressive, even if that increases the chance of failure. I take that feedback seriously because being on the leading edge of HPC and AI is going to require our teams to be at their best, pushed hard and able to take calculated risks to excel. While it will take a while for us to get there, I’d like to be a leader who helps deliver an environment like that – supportive, but with a call to our teams to aim high.

What is the biggest challenge you face in your current role?

The biggest challenge is balancing many competing ideas, all of which have great merit. I’m reminded just how much more there is to explore in our community, and I’m often frustrated we can’t fund more projects. Prioritizing and deciding which projects and ideas to take forward is difficult. HPC and AI are changing so rapidly that it’s also extremely difficult to build a stable research portfolio where anyone can develop a robust multi-year financial plan. While this is of course a challenge, it also coincides with probably one of the most exciting periods of change in our industry, so I like to remind myself that “with great challenges comes great opportunity.”

What changes do you see for the HPC / AI community in the next 5-10 years, and how do you see your own skills evolving during this time frame?

One of the biggest challenges is simply how to continue to provide huge increases in performance without going to insane levels of power consumption in future chips. This will impact operational expenditure, cooling, data center design, etc., to the point that it will pose a substantial challenge to HPC and AI users.

From a purely practical perspective, power and cooling are going to create huge infrastructure challenges which are not trivial to solve and ultimately take many years of planning. On the technology front, working with the teams at NNSA has increased my interest in future novel hardware options, such as coarse-grained reconfigurable hardware, photonics, neuromorphic chips and even quantum. We are seeing some amazing results in each of these areas, which makes me think the coming decade will be a rich space of innovation for the computational sciences.

I’m also optimistic that in the next 5-10 years we will have addressed some of the challenges (probably not all) of using AI technologies alongside traditional HPC to provide dramatic acceleration for our science. During that time, I plan to keep current with many of the newest technologies so I can understand what our technical teams are telling me in project proposals and the like. That requires a lot of reading time, working across the DOE labs, and engaging with our academic partners and vendors to understand challenges, opportunities and emerging ideas. It’s precisely that excitement which keeps me in the HPC community; there is virtually never a moment of boredom because the pace of change is relentless.

What is your view on the convergence and co-dependence of HPC and AI?

My impression is that we are still working out how this will evolve in the future.

I see AI today as more of an addition to HPC workflows and tools, helping us drive automation in complex design, simulation, etc. The prospects here are incredible, and we are busy trying to bring this to the program I work on. Where I see more of a struggle is whether AI will replace significant amounts of modeling and simulation in the future. While there are initial papers and early results in these areas, most of the scientists I work with are nervous about even small decreases in predictive accuracy, especially for the high-consequence work that DOE performs. That means true AI surrogates replacing HPC are unlikely in the near term, but I think the combination of AI technology with existing HPC codes will provide vast improvements in productivity and much deeper analysis of results.

HPC is likely to need to adapt to an increasingly AI-driven world, especially in the hardware domain, where the huge sums of money being invested in AI technology will drive design in the future. However, many of the things that AI needs are also things that HPC needs; for instance, the memory bandwidth required for inferencing will help HPC codes too. One thing the HPC community has been great at for decades is adapting to the hardware trends of the day to deliver cost-effective, large-scale science. In many ways, AI will be a change like the move from vector supercomputers to commodity processors – significant, but something we should be confident we can handle.

I do, however, think that AI can learn from HPC too. Our community has helped develop a lot of the foundational technologies that large AI companies are using – for instance, scalable networking and collectives. AI companies also need the energy efficiency and algorithmic improvements that HPC researchers have worked on for decades; that’s something we can help with.

Do you believe science drives technology or technology drives science?

My experience has been that most computational scientists are able to find extremely novel ways to utilize whatever resources they can access. That’s probably a “technology drives science” perspective, but it’s founded in seeing amazingly innovative colleagues figure out how to utilize new hardware functionality in each generation of machine we deployed at the NNSA and DOE.

Would you like to share anything about your personal life?

I love traveling – that’s good, because I try to get out and visit our labs, plants and sites as much as I can. Nothing gets the excitement up more than meeting the folks who are working hard in our program. At home, I love to cook and bake, watch movies, and go swimming or out for a run. Recently I completed my last enlistment with the Army National Guard, so now I’m enjoying being able to get up a little later on the weekends!