Big Data from Exascale will Pose Big Challenges for Visualization


When exascale computers someday begin calculating at a billion billion operations each second, will scientists be tempted to hold up the machine to deal with the output? Though he's only starting his five-year research program, Hank Childs of LBNL expects his Data Exploration at the Exascale project will focus on creating techniques that avoid regularly saving the full simulation output for visualization and analysis.
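To make that in situ idea concrete, here is a minimal sketch of reducing each timestep in memory rather than writing full state to disk. Everything in it is a hypothetical illustration, not part of Childs's project; it assumes NumPy and a toy stand-in solver.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_step(state):
    """Stand-in for one timestep of a real solver (hypothetical)."""
    return state + rng.normal(0.0, 0.01, state.shape)

def in_situ_summary(state, step):
    """Reduce the full field to a few scalars instead of saving it all."""
    return {"step": step, "min": float(state.min()),
            "max": float(state.max()), "mean": float(state.mean())}

state = np.zeros((512, 512))   # stands in for a field that is petabytes at scale
summaries = []                 # the small derived product we actually keep
for step in range(100):
    state = simulate_step(state)
    summaries.append(in_situ_summary(state, step))

# Only `summaries` (a few kilobytes) would ever touch disk,
# not 100 full snapshots of `state`.
```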

The first challenge is scale. "If somebody hands you a petabyte or, in the future, an exabyte, how do you load that much data from disk, apply an algorithm to it and produce a result?" Childs asks. "The second challenge is complexity. You only have about a million pixels on the screen – not many more than in your eye – so you have to do a million-to-one reduction of which data points make it onto the screen."
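One way to picture that million-to-one reduction is binning: collapse a large field onto a screen-sized grid, keeping one representative value per pixel. The sketch below assumes a block-maximum operator (just one of many possible reductions) and a toy NumPy field small enough to fit in memory.

```python
import numpy as np

def reduce_to_screen(field, screen=(1024, 1024)):
    """Keep one representative value (the block maximum) per screen pixel."""
    by, bx = field.shape[0] // screen[0], field.shape[1] // screen[1]
    trimmed = field[:screen[0] * by, :screen[1] * bx]   # drop ragged edges
    blocks = trimmed.reshape(screen[0], by, screen[1], bx)
    return blocks.max(axis=(1, 3))                      # screen-sized image

field = np.random.rand(4096, 4096)   # stands in for a vastly larger dataset
image = reduce_to_screen(field)      # here 16 data points compete per pixel;
assert image.shape == (1024, 1024)   # at exascale the ratio is closer to 1e6:1
```

The essential design choice is reducing before rendering, so that only one value per pixel survives instead of every data point being shipped to the screen.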

Running at exascale will make Childs's task even more complicated: Like the simulation itself, data processing will have to be executed in a billion-way concurrent environment while minimizing information flow to trim power costs. Read the Full Story.