Paving the Way for Future Crash Simulation

Greg Clifford

Over at the Cray Blog, Greg Clifford writes that even though many leading-edge automotive HPC environments already have petaflop-size compute capacity, that won’t be enough to meet crash/safety simulation requirements in the near future.

The requirement for both application scaling (capability computing) and system throughput (capacity computing) continues to grow. The "THUMS" human body model already contains 1.8 million elements, and safety simulations of over 50 million elements are on the roadmap. Models of this size will require scaling to thousands of cores just to maintain current turnaround times.

The introduction of new materials, including aluminum, composites, and plastics, means more simulations are required to explore the design space and account for variability in material properties. Simulating with average material properties can predict an adequate design, but an unfortunate combination of material variability can still result in a failed certification test. Hence there is an increased requirement for stochastic simulation methods to ensure a robust design. This in turn requires dozens of separate runs for a given design and a significant increase in compute capacity, but that is a small cost compared to the expense of reworking the design of a new vehicle.
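To see why average material properties are not enough, consider a minimal Monte Carlo sketch. The numbers, the normal distribution, and the pass/fail criterion below are all illustrative assumptions, not values from the article: a part whose nominal strength comfortably exceeds the demand can still fail a nontrivial fraction of the time once manufacturing variability is sampled.

```python
import random

# Hypothetical demand the part must withstand to pass certification (MPa).
DEMAND_MPA = 250.0

def passes(yield_strength_mpa, demand_mpa=DEMAND_MPA):
    """Illustrative pass/fail criterion: the part survives if its
    yield strength exceeds the certification demand."""
    return yield_strength_mpa > demand_mpa

def failure_probability(mean=270.0, stdev=15.0, n_runs=10_000, seed=42):
    """Monte Carlo estimate: sample the material strength n_runs times
    from an assumed normal distribution and count how often the design
    would fail. Each sample stands in for one full crash simulation."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_runs) if not passes(rng.gauss(mean, stdev))
    )
    return failures / n_runs

if __name__ == "__main__":
    # The average-property check passes comfortably...
    print("nominal design passes:", passes(270.0))
    # ...yet sampled variability still produces a meaningful failure rate.
    print("estimated failure probability:", failure_probability())
```

In a real crash/safety workflow each "sample" is a full finite-element run, which is exactly why stochastic robustness studies multiply the compute-capacity requirement by the number of runs per design.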

Read the Full Story.