Being in the right place at the right time is certainly a key to success. So perhaps it is fair to ask whether tools to debug exascale applications will significantly lag the availability of the new architectures, delaying the broader usefulness of a precious resource. After all, a great many things have to be in place for efficient debugging to be possible. Debugger developers will need to be invited to the table at the earliest stages in order to make their requirements known. They require access to the architecture and need to know the number and nature of the cores and the role of special-purpose processors. They need to know the programming model, and they require a compiler and OS with the right hooks into the debugger itself, and more. (See sidebar “What is needed?”) Beyond these “basics” lie the formidable tasks of ensuring acceptable performance and creating interfaces that make the tool useful, perhaps even intuitive, by helping users interpret unprecedented complexity.
Even given these requirements and the early stage of exascale development, developers are moving ahead with debugger concepts in the hope of arriving at the station in time to help the exascale ultra-express depart on schedule. In this article, we ask what an exascale debugger might look like. Will it really be different from your now-average, run-of-the-mill petascale debugger, or the charmingly old-fashioned terascale one your wacky uncle Lou used to let you play with in the lab on weekends? How will new tools reduce the complexity of million-way parallelism to interfaces and displays that we simple humans can comprehend and manage? And what sort of tool can run efficiently enough at scale without adding unbearable expense?