
Reed's teraflops-year code compilation

Dan Reed has posted a summary on his blog of this year's DOE SOS11 workshop, held in Key West. This year's workshop theme was "Challenges of Sustained Petascale Computation."

The post is interesting, and Dr. Reed's perspective is always valuable. Two comments jumped out at me.

In the vendor session, Intel discussed its 80-core teraflop test chip and some of the electrical signaling issues it was intended to test. Everyone at the workshop (and at Microsoft's Manycore Computing Workshop) agreed that we will see hundred-core commodity chips by the end of the decade. Looking further out, one can see thousand-core chips coming.

And then there's this comment, suggesting that as machines grow more complex we'll need to rely more on the machines themselves to help us get performance out of our software:

What I am really arguing is that we need to rethink aggressive machine optimization, virtualization and abstraction. What’s wrong with devoting a teraflop-year to large-scale code optimization? I don’t just mean peephole optimization or interprocedural analysis. Think about genetic programming, evolutionary algorithms, feedback-directed optimization, multiple objective code optimization, redundancy for fault tolerance and other techniques that assemble functionality from building blocks. Why have we come to believe that compilation times should be measurable with a stopwatch rather than a sundial?

A sundial! Good stuff.
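To make the idea concrete, here is a toy sketch of the kind of search-based optimization Reed alludes to: an evolutionary algorithm hunting for a good combination of compiler options. Everything here is hypothetical for illustration; the flag names are plausible GCC-style options and the fitness function is a synthetic cost model standing in for an actual benchmark run, which is where the real teraflop-years would go.

```python
import random

# Hypothetical flag set; in a real system, fitness() would compile and
# benchmark the code under each flag combination.
FLAGS = ["-funroll-loops", "-ftree-vectorize", "-fprefetch", "-flto"]

def fitness(genome):
    """Synthetic runtime model (lower is better) standing in for a real run."""
    cost = 100.0
    if genome[1]:
        cost -= 20          # vectorization helps on its own
    if genome[0] and genome[1]:
        cost -= 10          # unrolling and vectorization interact favorably
    if genome[3]:
        cost -= 5           # link-time optimization gives a small win
    if genome[2] and not genome[1]:
        cost += 8           # prefetching hurts the scalar code path here
    return cost

def evolve(pop_size=20, generations=50, seed=0):
    """Evolve a population of on/off flag vectors toward lower runtime."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # rank by measured "runtime"
        survivors = pop[: pop_size // 2]      # keep the fitter half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)   # pick two parents
            cut = rng.randrange(1, len(FLAGS))
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.1:            # occasional mutation
                i = rng.randrange(len(FLAGS))
                child[i] = not child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return [f for f, on in zip(FLAGS, best) if on], fitness(best)
```

With a real benchmark in place of the cost model, each fitness evaluation is a full compile-and-run cycle, which is exactly why this style of optimization wants a teraflop-year rather than a stopwatch.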

