
Ultrascale visualization

My pal Randall over at VizWorld.com wrote up a detailed review of an article covering a recent experiment that ran the visualization application VisIt on datasets of 500 billion to 2 trillion grid points. The article appears in this month's issue of IEEE Computer Graphics & Applications.

So, the paper is primarily about the techniques, trials, and successes of "Ultrascale Visualization" across different architectures, and it covers that ground fantastically well. The test setup is straightforward: extract and render an isosurface (via Marching Cubes) of a very large dataset, which winds up being the Supernova Simulation. The team had originally hoped to use volume rendering (the subject of many of the snazzy images associated with the work), but the opportunistic nature of access to these machines meant that by the time their volume-rendering algorithm was ready to go, their window of opportunity on some of the supercomputers had passed.
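The "pure parallelism" approach behind a run like this amounts to splitting the grid into blocks and letting each worker process its block independently. As a rough illustration, here is a toy sketch of the first step of Marching Cubes, the per-cell test for whether the isosurface passes through a cell, run across blocks in parallel. This is purely illustrative Python, not the paper's VisIt pipeline, and it uses a 1-D field where the real work uses 3-D meshes.

```python
# Sketch of data-parallel isosurface work: split the field into blocks,
# have each worker count cells whose endpoint values straddle the
# isovalue (the cell test at the heart of Marching Cubes), then reduce.
# All names here are illustrative, not from the paper.
from multiprocessing import Pool

def count_crossing_cells(args):
    """Count 1-D 'cells' (adjacent point pairs) crossed by the isovalue."""
    values, isovalue = args
    crossings = 0
    for a, b in zip(values, values[1:]):
        if a != b and min(a, b) <= isovalue <= max(a, b):
            crossings += 1
    return crossings

def parallel_isosurface_cells(field, isovalue, nblocks=4):
    # Overlap blocks by one point so no cell straddles a block boundary
    # and none is counted twice.
    n = len(field)
    size = (n + nblocks - 1) // nblocks
    blocks = [field[i * size : (i + 1) * size + 1] for i in range(nblocks)]
    with Pool(nblocks) as pool:
        return sum(pool.map(count_crossing_cells,
                            [(b, isovalue) for b in blocks]))

if __name__ == "__main__":
    # Toy scalar field; a production run streams trillions of cells.
    field = [0.0, 0.4, 1.2, 0.8, 0.1, 1.5, 2.0, 0.3]
    print(parallel_isosurface_cells(field, 1.0))  # -> 4
```

The reduce step here is trivial (a sum); in the real article the expensive part is the I/O needed to get the blocks to the workers in the first place, which is exactly the bottleneck the authors flag below.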

The article (and the review) is interesting for its illumination of all the little details that have to be finagled to make something like this work. From the paper itself:

Our results demonstrate that pure parallelism does scale but is only as good as its supporting I/O infrastructure. We successfully visualized up to four trillion cells on diverse architectures with production visualization software. The supercomputers we used were “underpowered,” in that the current simulation codes on these machines produce meshes far smaller than a trillion cells. They were appropriately sized, however, when considering the rule of thumb that the visualization task should get 10 percent of the simulation task’s resources and assuming our trillion-cell mesh represents the simulation of a hypothetical 160,000-core machine.
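The sizing argument in that quote can be checked with simple arithmetic: under the 10 percent rule of thumb, a visualization allocation corresponds to a simulation machine ten times its size. A minimal sketch (the function name and fraction constant are mine, the 10 percent figure is from the quote):

```python
# The quoted rule of thumb: visualization gets ~10% of the simulation
# task's resources, so a viz allocation "matches" a simulation machine
# ten times larger. 16,000 viz cores thus imply the hypothetical
# 160,000-core machine the authors mention.
VIZ_FRACTION = 0.10

def matched_simulation_cores(viz_cores):
    """Simulation machine size implied by a visualization allocation."""
    return int(viz_cores / VIZ_FRACTION)

print(matched_simulation_cores(16_000))  # -> 160000
```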
