Latest Exascale Report Looks at Future of Debugging

Building an Exascale system is going to be tough enough, but imagine the challenge of optimizing and debugging code with millions of processing elements and threads running around. The September issue of The Exascale Report is out with a feature story on this very topic:

Debugger developers will need to be invited to the table at the earliest stages in order to make their requirements known. They require access to the architecture and need to know the number and nature of the cores and the role of special purpose processors. They need to know the programming model and require a compiler and OS that have the right hooks to the debugger itself, and more. Beyond these “basics” are the formidable tasks of ensuring acceptable performance and creating interfaces that make the tool useful, perhaps even intuitive, providing assistance in the interpretation of unprecedented complexity.

Writer Bob Feldman does a great job with this piece, bringing in some of the folks who are seated at the table today, including Chris Gottbrath of Rogue Wave, Dong Ahn of LLNL, David Lecomber of Allinea, and Michael Wolfe from PGI.

Other stories in this issue of The Exascale Report:

  • John Kirkley: “UHPC Revisited: An Interview with DARPA Program Manager Bill Harrod”
  • Mike Bernhardt: “Dracott on Intel’s Exascale Labs in Europe” (Feature interview with Intel’s Richard Dracott)
  • Mike Bernhardt: “If They Build It, Who Will Run It?” (Is HPC leadership quietly slipping out of the United States?)

And finally, find out who said this:

“I’ll make a claim. There will be no general purpose exascale machine ever built that anyone can afford to operate, much less buy.”

Well, that would solve the debugging problem, anyway.