Wagering Against Exascale

Over at ExtremeTech, Joel Hruska writes that the daunting challenges of reaching exascale computing by the end of the decade were brought home recently in a presentation by Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory. In fact, Simon has wagered $2,000 of his own money that we won’t get there by 2020.

But here’s the thing: What if the focus on “exascale” is actually the wrong way to look at the problem?

FLOPS has persisted as the headline metric in supercomputing even as core counts and system density have risen, but a supercomputer’s peak performance may be a poor measure of its usefulness. What matters more is the ability to efficiently utilize a subset of the system’s total performance capability. In the long term, performing floating-point operations is cheap compared with moving data across nodes, which makes taking advantage of parallelism with good data locality even more important. Keeping data local is a better way to save power than spreading the workload across nodes, because as node counts rise, the data movement needed to sustain that concurrency consumes an increasing percentage of total system power.
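To make the locality point concrete, here is a minimal single-node sketch in C (our illustration, not taken from Simon’s slides or Hruska’s article). Both loops perform exactly the same N² additions, yet the strided traversal typically runs several times slower, because each element it touches drags an entire cache line through the memory hierarchy. The matrix size and the use of clock() for timing are arbitrary choices made for the example.

```c
/* Same FLOPS, different data movement: a sketch of why locality matters.
 * N is an illustrative size chosen to be much larger than typical caches. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

/* Unit-stride traversal: memory is read sequentially, so nearly every
 * byte fetched into cache is actually used. */
static double sum_row_major(const double *a) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i * N + j];
    return s;
}

/* Strided traversal: each access lands N doubles away from the last,
 * so a whole cache line is fetched for every single element used. */
static double sum_col_major(const double *a) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i * N + j];
    return s;
}

int main(void) {
    double *a = malloc(sizeof(double) * N * N);  /* 128 MB at N = 4096 */
    if (!a) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++)
        a[i] = 1.0;

    clock_t t0 = clock();
    double s1 = sum_row_major(a);
    clock_t t1 = clock();
    double s2 = sum_col_major(a);
    clock_t t2 = clock();

    /* Identical arithmetic (N*N additions each), very different runtimes. */
    printf("row-major: sum=%.0f in %.3fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("col-major: sum=%.0f in %.3fs\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```

The same trade-off reappears at system scale: once the data being moved lives on another node, the “cache miss” becomes a network message, and its energy cost dwarfs that of the arithmetic it feeds.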

Read the Full Story or Download the slides