
Sandia: New 3D-imaging Workflow Could Certify HPC Materials Simulations

ALBUQUERQUE, N.M. — Sandia National Laboratories researchers have created a method of processing 3D images for computer simulations that could benefit several industries, including health care, manufacturing and electric vehicles.

At Sandia, the method could prove vital in certifying the credibility of high-performance computing (HPC) simulations used to determine the effectiveness of various materials for weapons programs and other efforts, said Scott A. Roberts, Sandia’s principal investigator on the project. Sandia can also use the new 3D-imaging workflow to test and optimize batteries used for large-scale energy storage and in vehicles.

“It’s really consistent with Sandia’s mission to do credible, high-consequence computer simulation,” he said. “We don’t want to just give you an answer and say, ‘trust us.’ We’re going to say, ‘here’s our answer and here’s how confident we are in that answer,’ so that you can make informed decisions.”

The researchers shared the new workflow, dubbed EQUIPS (Efficient Quantification of Uncertainty in Image-based Physics Simulation), in a paper published today in the journal Nature Communications.

“This workflow leads to more reliable results by exploring the effect that ambiguous object boundaries in a scanned image have in simulations,” said Michael Krygier, a Sandia postdoctoral appointee and lead author on the paper. “Instead of using one interpretation of that boundary, we’re suggesting you need to perform simulations using different interpretations of the boundary to reach a more informed decision.”

EQUIPS can use machine learning to quantify the uncertainty in how an image is drawn for 3D computer simulations. By giving a range of uncertainty, the workflow allows decision-makers to consider best- and worst-case outcomes, Roberts said.

The EQUIPS Workflow

Think of a doctor examining a CT scan to create a cancer treatment plan. That scan can be rendered into a 3D image, which can then be used in a computer simulation to create a radiation dose that will efficiently treat a tumor without unnecessarily damaging surrounding tissue. Normally, the simulation would produce one result because the 3D image was rendered once, said Carianne Martinez, a Sandia computer scientist.

But drawing object boundaries in a scan can be difficult, and there is more than one sensible way to do so, she said. “CT scans aren’t perfect images. It can be hard to see boundaries in some of these images.”

Humans and machines will draw different but reasonable interpretations of the tumor’s size and shape from those blurry images, Krygier said.

Using the EQUIPS workflow, which can use machine learning to automate the drawing process, the 3D image is rendered into many viable variations showing size and location of a potential tumor. Those different renderings will produce a range of different simulation outcomes, Martinez said. Instead of one answer, the doctor will have a range of prognoses to consider that can affect risk assessments and treatment decisions, be they chemotherapy or surgery.
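The idea described above can be illustrated with a toy sketch. The real EQUIPS workflow generates candidate segmentations with Bayesian convolutional neural networks or Monte Carlo Dropout networks; here, as a stand-in assumption, different boundary interpretations are simulated simply by thresholding a blurry synthetic "scan" at several plausible cutoffs and reporting the range of object sizes rather than a single answer. All names and values below are illustrative, not taken from the EQUIPS code.

```python
import numpy as np

# Toy "scan": a disk whose intensity fades toward the edge, so the
# object boundary is ambiguous (a stand-in for a blurry CT image).
y, x = np.mgrid[-32:32, -32:32]
dist = np.sqrt(x**2 + y**2)
image = np.clip(1.0 - dist / 40.0, 0.0, 1.0)

# Each threshold is one plausible interpretation of where the
# boundary lies. EQUIPS would instead sample segmentations from a
# Bayesian CNN or Monte Carlo Dropout network.
thresholds = [0.3, 0.4, 0.5, 0.6, 0.7]
volumes = [int((image > t).sum()) for t in thresholds]

# Report a range of outcomes instead of a single-point answer, so a
# decision-maker can weigh best- and worst-case results.
print(f"object size range: {min(volumes)} to {max(volumes)} pixels")
```

Running each segmentation through the downstream physics simulation (rather than just measuring size, as here) is what turns this spread of interpretations into a spread of simulation outcomes.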

“When you’re working with real-world data there is not a single-point solution,” Roberts said. “If I want to be really confident in an answer, I need to understand that the value can be anywhere between two points, and I’m going to make decisions based on knowing it’s somewhere in this range not just thinking it’s at one point.”

The EQUIPS team has made the source code and a working example of the new workflow available online for other researchers and programmers. The Bayesian Convolutional Neural Network source code is available here and the Monte Carlo Dropout Network source code here. Both are on GitHub. A Python Jupyter notebook demonstrating the entire EQUIPS workflow on a simple manufactured image is available here.
