Sponsored Post: Five Reasons to Celebrate Debugging at Scale

David Lecomber

We all have our objectives for scalability. It may mean hundreds, thousands, or millions of cores to you.

But spare a thought for the software teams.

We need developers and application analysts to get applications scaling on that new machine. But is optimization the only task we expect of them?

In reality, there is a necessary step before this: debugging.

What should every application developer know about debugging at scale?

1. Debugging at scale is necessary

Experience shows that when moving to higher scales, errors are inevitable.

Bugs are not all the same but patterns do recur.

At extreme process counts, even independent, extremely rare bugs become cumulatively likely. Arithmetic overflow and communication race conditions are frequent troublemakers that surface more often at scale.

If your application is rock-solid at small scale, don’t waste time downsizing the problem to reproduce an error that may not exist at the smaller scale.

2. Debugging at scale is fast

Allinea DDT, the parallel debugger, and Allinea MAP, the parallel performance profiler, are part of our unified suite of scalable tools.

Our tools work with your system to rapidly build up a scalable infrastructure that makes every operation superfast. They’re so quick to start that there’s no time for the kettle to boil, let alone a coffee break!

Large collective debugging operations, such as stepping all the processes, viewing all their stacks, and comparing data, take a fraction of a second even at extreme scale.

Use the debugger at the size of your problem and discover how your application truly behaves.

3. Debugging at scale is easy

The user interface is as high a priority as scaling the performance. Innovation here brings tremendous benefit.

We provide unique insight into unusual patterns. For example, Allinea DDT automatically plots graphs of variable values across processes and color-highlights changes, helping you spot the unexpected.

Use the graphical insights and look for outliers: be suspicious of processes and data that do not fit the pattern.

4. Debugging at scale is priceless

Your time and machine time matter. Crashing simulations are worth nothing; nobody gets a PhD for those.

Debugging is a science: apply logic and reason, and don't repeat yourself. One session in a debugger is often enough to fix the issue.

Be organized and methodical and you can make the most out of the machine time that you have.

5. Debugging is the fastest route to solving your problem

Contrast the handful of minutes it takes users to fix bugs at 50,000 or 100,000 cores and higher using Allinea DDT with the alternatives.

A scalable debugger will run to the crash, show the exact location, and keep processes alive so that you can understand the whole picture.

Now imagine a debugger that could not scale when you need it most.

Imagine putting a print statement into the code, deciphering the output from every process, and repeating until you find the exact point of the crash.

No wonder the first reaction can be despair.

Cast out those old ways – they’ve been on borrowed time for too long – and move to a scalable debugger!

Whilst countless applications use Allinea Software's tools to leap beyond Petascale, it's your scale, and the ambition of your scale, that matter.

There’s an old saying: once you're accustomed to using a debugger, trying to debug without one feels like being a blind man in a dark room looking for a black cat that isn’t there.

That’s just one reason why Allinea Software’s tools perform, at any scale.

About the Author: David Lecomber is CEO and Founder of Allinea Software.