Supercomputer Visuals Without GPUs


Those of you actively performing research on high performance computing platforms know that computing is only half the battle; visualizing the resulting data is the other half.  That usually requires a specific set of hardware [large or small] designed to mine useful information out of the mass of data.  The folks at Argonne National Laboratory are working on a solution: rather than moving massive output datasets to specialized graphics systems, use the compute cores themselves to do the post-processing and visualization.

Tom Peterka, a computer scientist at Argonne National Lab, has written software for Intrepid, the lab's IBM Blue Gene/P, that lets him do the visualization work on the machine's own compute cores.  His approach eliminates the need to move the data off the system that produced it.
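To make the idea concrete, here is a minimal sketch of in-situ parallel rendering in that spirit.  This is not Peterka's actual code; the maximum-intensity projection, the brick layout, and the buffer sizes are illustrative assumptions.  The point is the shape of the workflow: each MPI rank ray-casts the piece of the volume it already holds in memory into a partial image, and the partial images are composited across ranks with a reduction, so the raw voxels never leave the machine.

```c
/* Illustrative sketch: each rank renders its local subvolume, then partial
 * images are composited with an MPI reduction.  Uses a maximum-intensity
 * projection (MIP), which composites correctly with MPI_MAX; real volume
 * rendering uses more involved compositing schemes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define IMG_DIM   512            /* output image is IMG_DIM x IMG_DIM  */
#define BRICK_DIM 64             /* each rank owns a BRICK_DIM^3 brick */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* In a real run the brick would already sit in memory as simulation
     * output; here it is filled with synthetic values. */
    size_t nvox = (size_t)BRICK_DIM * BRICK_DIM * BRICK_DIM;
    float *brick = malloc(nvox * sizeof *brick);
    for (size_t i = 0; i < nvox; i++)
        brick[i] = (float)((i * 2654435761u + rank) % 1000) / 1000.0f;

    /* Render the local brick: project along z, keeping the maximum
     * sample per pixel. */
    float *partial = calloc((size_t)IMG_DIM * IMG_DIM, sizeof *partial);
    for (int z = 0; z < BRICK_DIM; z++)
        for (int y = 0; y < BRICK_DIM; y++)
            for (int x = 0; x < BRICK_DIM; x++) {
                float v = brick[((size_t)z * BRICK_DIM + y) * BRICK_DIM + x];
                /* map brick coordinates into the shared image plane; a real
                 * renderer would account for the brick's global offset */
                int px = x * IMG_DIM / BRICK_DIM;
                int py = y * IMG_DIM / BRICK_DIM;
                if (v > partial[(size_t)py * IMG_DIM + px])
                    partial[(size_t)py * IMG_DIM + px] = v;
            }

    /* Composite: element-wise max across all ranks lands on rank 0.
     * Only IMG_DIM^2 floats cross the interconnect, not the voxels. */
    float *final_img = NULL;
    if (rank == 0)
        final_img = malloc((size_t)IMG_DIM * IMG_DIM * sizeof *final_img);
    MPI_Reduce(partial, final_img, IMG_DIM * IMG_DIM, MPI_FLOAT, MPI_MAX,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("composited %dx%d image from %d ranks\n",
               IMG_DIM, IMG_DIM, nprocs);
        free(final_img);
    }
    free(brick);
    free(partial);
    MPI_Finalize();
    return 0;
}
```

The payoff of structuring it this way is that only a finished image, a few megabytes at most, ever travels anywhere, while the billions of voxels stay on the cores that computed them.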

“It allows us to [visualize experiments] in a place that’s closer to where data reside–on the same machine,” says Peterka.

Peterka’s test data was obtained from John Blondin of NC State and Anthony Mezzacappa of ORNL.  The data represents thirty sequential time steps in the simulated explosive death of a star.  So far, Peterka’s largest test with the data crested 89 billion voxels [3D pixels], producing 2D images 4,096 pixels on a side.  Overall, the processing required 32,768 of Intrepid’s 163,840 cores.  However, as the runs grow, I/O becomes the problem.
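A quick back-of-envelope, assuming the work was spread evenly: 89 billion voxels over 32,768 cores is roughly 2.7 million voxels per core, while the finished 4,096 x 4,096 image holds only about 16.8 million pixels in total.  The heavy lifting is getting the data onto the cores, not producing the picture.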

“The bigger we go, the more the problem is bounded by [input/output speeds],” says Peterka.

Very interesting technology.  If I read the technology landscape correctly, we’re moving the core compute onto GPUs while work like this moves visualization away from them.  For more info, read the full article here at TechnologyReview.