How GenAI Is Changing Work at a National Lab

At science organizations like national laboratories, the use of generative AI has the potential to accelerate scientific discovery in critical areas. But with new tools come new questions:

  • How can science organizations implement genAI responsibly?
  • How are employees across different roles using genAI in their daily work?

A recent study by the University of Chicago and Argonne National Laboratory examines generative AI tools — specifically large language models (LLMs) — within a national lab setting. The study not only highlights AI’s potential to enhance productivity, but also emphasizes the need to address areas such as privacy, security and transparency.

Through surveys and interviews, the researchers studied how Argonne employees use LLMs — and how they envision using them in the future — to generate content and automate workflows. The study also tracked the early adoption of Argo, the lab’s internal LLM interface released in 2024.

Argonne and Argo

Argonne said its organizational structure paired with Argo made the lab an ideal environment for the study. Its workforce includes science and engineering workers as well as employees in such areas as human resources, facilities and finance.


The researchers found that employees primarily used genAI as a copilot and as a workflow agent. As a copilot, the AI works alongside the user, helping with tasks like writing code, structuring text or tweaking the tone of an email. For the most part, employees are currently sticking to tasks where they can easily check the AI’s work. Looking ahead, employees said they envision using copilots to extract insights from large amounts of text, such as scientific literature or survey data.
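To make the copilot pattern concrete, here is a minimal sketch (an illustration, not code from the study): a draft email is sent to an LLM for a tone rewrite, and the author reviews the suggestion before using it. The base URL, model name and API key are placeholders standing in for whatever internal interface an organization provides, not details of Argonne’s Argo service.

```python
# Hypothetical copilot-style use: ask an LLM to adjust the tone of a draft email,
# then let the author review the suggestion before sending. The base_url, model
# name, and API key are placeholders, not details of Argonne's Argo interface.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.example.org/v1",  # assumed internal endpoint
    api_key="YOUR_INTERNAL_TOKEN",
)

draft = "Send me the quarterly numbers ASAP. This is late again."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Rewrite the user's email to be polite and professional. Keep it brief."},
        {"role": "user", "content": draft},
    ],
)

# The human stays in the loop: the suggestion is printed for review, not sent automatically.
print(response.choices[0].message.content)
```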

As a workflow agent, AI is used to automate complex tasks, which it performs mostly on its own. Around a quarter of the survey’s open-ended responses — split evenly between operations and science workers — mentioned workflow automation, but the types of workflows differed between the two groups. For example, operations workers used AI to automate processes like searching databases or tracking projects. Scientists reported automating workflows for processing, analyzing and visualizing data.
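As a rough sketch of the workflow-agent idea (again an illustration under assumed details, not the study’s code), an LLM can sit inside a data pipeline and convert a free-text instrument log into structured JSON that downstream analysis or visualization steps consume. The endpoint, model name and log format below are assumptions.

```python
# Hypothetical sketch of workflow automation: an LLM converts a free-text
# instrument log into structured JSON that later pipeline steps (analysis,
# visualization) can consume. Endpoint, model, and log format are assumptions,
# not details of Argonne's internal tooling.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.example.org/v1",  # assumed internal endpoint
    api_key="YOUR_INTERNAL_TOKEN",
)

def extract_run_metadata(log_text: str) -> dict:
    """Ask the model for sample_id, temperature_K, and duration_s as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Extract sample_id, temperature_K, and duration_s from the log. Reply with JSON only."},
            {"role": "user", "content": log_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

record = extract_run_metadata("Run 42: sample A7 held at 295 K for 1800 seconds.")
print(record)  # e.g. {"sample_id": "A7", "temperature_K": 295, "duration_s": 1800}
```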

“Science is an area where human-machine collaboration can lead to significant breakthroughs for society,” said Kelly Wagman, a Ph.D. student in computer science at the University of Chicago and lead author on the study. “Both science and operations workers are crucial to the success of a laboratory, so we wanted to explore how each group engages with AI and where their needs align and diverge.”

While the study focused on a national lab, some of the findings can extend to other organizations, including universities, law firms and banks, which have varied user needs and similar cybersecurity challenges.

Argonne employees regularly work with sensitive data, including unpublished scientific results, controlled unclassified documents and proprietary information. In 2024, the lab launched Argo, which gives employees secure access to LLMs from OpenAI through an internal interface. Argo doesn’t store or share user data, which Argonne said makes it a more secure alternative to ChatGPT and other commercial tools.

Argo was the first internal genAI interface to be deployed at a national laboratory. For several months after Argo’s launch, the researchers tracked how it was used across the lab. Analysis revealed a small but growing user base of both science and operations workers.

Possibilities and Risks

While generative AI presents exciting opportunities, the researchers also emphasize the importance of thoughtful integration of these tools to manage organizational risks and address employee concerns.

The study found that employees were significantly concerned about generative AI’s reliability and its tendency to hallucinate. Other concerns included data privacy and security, overreliance on AI, potential impacts on hiring and implications for scientific publishing and citation.

To promote the appropriate use of generative AI, the researchers recommend that organizations proactively manage security risks, set clear policies and offer employee training.

“Without clear guidelines, there will be a lot of variability in what people think is acceptable,” said Marshini Chetty, a professor of computer science and leader of the Amyoli Internet Research Lab at the university. “Organizations can also reduce security risks by helping people understand what happens with their data when they use both internal and external tools — Who can access the data? What is the tool doing with it?”

At Argonne, almost 1,600 employees have attended the laboratory’s generative AI training sessions. These sessions introduce employees to Argo and generative AI and provide guidance for appropriate use.

“Science often involves very bespoke workflows with many steps. People are finding that with LLMs, they can create the glue to link these processes together,” said Wagman. “This is just the beginning of more complicated automated workflows for science.”

“Generative AI technology is new and rapidly evolving, so it’s hard to anticipate exactly how people will incorporate it into their work until they start using it. This study provided valuable feedback that is informing the next iterations in Argo’s development,” said Argonne software engineer Matthew Dearing, whose team develops AI tools to support the laboratory’s mission.

Dearing, who holds a joint appointment at the University of Chicago, collaborated on the study with Wagman and Chetty.

“We knew that if people were going to get comfortable with Argo, it wasn’t going to happen on its own,” said Dearing. “Argonne is leading the way in providing generative AI tools and shaping how they are integrated responsibly at national laboratories.”

On April 26, the team presented their results at the 2025 Association for Computing Machinery (ACM) CHI Conference on Human Factors in Computing Systems in Japan.

This research was funded by the UChicago Data Science Institute’s AI+Science Research Initiative.