A team of researchers from the Georgia Institute of Technology has conducted the largest-ever computational fluid dynamics, or CFD, simulation of high-speed compressible fluid flows, according to Oak Ridge National Laboratory, whose Frontier exascale supercomputer hosted the run.
The research team applied a new computational technique called information geometric regularization, or IGR, and combined it with a unified CPU-GPU memory approach — optimizing how data is shared between traditional processors (CPUs) and graphics processors (GPUs) — to reach new levels of CFD performance.
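The article does not detail how IGR works, but the broad idea behind regularizing a compressible-flow solver is that shock waves make the governing equations develop near-discontinuities, which are expensive to capture numerically; a regularization modifies the equations so the fronts stay thin but smooth. As a loose, classical stand-in for that idea — not the authors' IGR method, which is far more sophisticated — the sketch below regularizes the 1D inviscid Burgers equation with a small diffusion term and shows that the shock front remains a smooth, finite-gradient profile:

```python
import numpy as np

# Toy illustration only (NOT the IGR scheme from the article): we solve
# u_t + (u^2/2)_x = eps * u_xx on a periodic domain. Without the eps term
# the solution forms a true discontinuity; the regularization keeps the
# front smooth, which is the general idea behind regularized shock solvers.

def step(u, dx, dt, eps):
    """One explicit step: conservative upwind convection + centered diffusion."""
    f = 0.5 * u * u                      # Burgers flux
    conv = (f - np.roll(f, 1)) / dx      # upwind flux difference (valid since u > 0)
    um, up = np.roll(u, 1), np.roll(u, -1)
    diff = eps * (up - 2.0 * u + um) / dx**2
    return u + dt * (diff - conv)

def run(n=400, eps=2e-3, t_end=0.5):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)  # smooth initial data, u > 0
    # Time step bounded by both the convective CFL and diffusive limits
    dt = 0.2 * min(dx / u.max(), dx**2 / (2.0 * eps))
    t = 0.0
    while t < t_end:
        u = step(u, dx, dt, eps)
        t += dt
    return x, u

if __name__ == "__main__":
    x, u = run()
    # A shock would form near t ~ 0.32 in the inviscid problem; here the
    # steepest gradient stays finite because the regularization smooths it.
    print(f"max |du/dx| = {np.abs(np.gradient(u, x)).max():.1f}")
```

The design point the toy makes is that regularization trades a discontinuity for a steep but resolvable layer, so the solver needs no special shock-capturing logic at the front; IGR achieves an analogous effect through a different, information-geometric mechanism.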




