Andrew's Corner: When Benchmarking Goes Wrong

Our colleague Andrew Jones has written a great article for this month's ZDNet HPC feature on the much-contested issue of benchmarking.  Benchmarking has become quite the black art of high performance computing.  Procurements often rely upon the results of a small subset of codes run on a specific platform.  Acceptance testing via benchmarking has become standard practice, especially in large installations.  However, as Andrew points out, organizations should be very careful to interpret benchmarking results correctly.

For example, people often assume the system with the best benchmark will win the order. Sometimes bidders make this assumption, seeing the benchmark as the most concrete aspect of the proposal evaluation process, and sometimes it is buyers who think the benchmark will provide an unambiguous winner.

Only a select few customers have the luxury of ignoring everything except performance.  Reliability, service, partnership and price are also very important in determining a procurement winner.  As Andrew says, "benchmarks will not — or should not — exclusively pick the winner, but they can and should be used to narrow the field and help avoid buying a turkey."

One of the most important use cases for benchmarking is acceptance testing. Benchmarking serves several goals in this situation.  First, it allows the customer to evaluate the delivered performance of the machine against what was proposed.  It essentially keeps the vendor honest by proving that System X will run Application_A this fast.  It also serves to weed out many initial system gremlins.  What if a node has a bad DIMM?  What if an InfiniBand cable was damaged in shipping?  These are all things that could feasibly be fixed while the installation technicians are still on site.  However, one should also consider other aspects of acceptance testing.  These may include system uptime, file system reliability and power/cooling considerations.
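To make the idea concrete, here is a minimal sketch of the kind of check an acceptance test might automate: comparing each node's measured benchmark runtime against the vendor's committed figure and flagging outliers, such as a node dragged down by a bad DIMM. The node names, committed time, and tolerance are illustrative assumptions, not figures from the article.

```python
def flag_slow_nodes(measured, committed_seconds, tolerance=0.05):
    """Return the names of nodes whose benchmark runtime exceeds the
    committed time by more than the given tolerance (default 5%)."""
    limit = committed_seconds * (1 + tolerance)
    # Sorted so the report is stable and easy to diff across runs.
    return [node for node, runtime in sorted(measured.items())
            if runtime > limit]

# Hypothetical example: the vendor committed Application_A at 120 s
# per node; node03 runs far slower and should be investigated.
runs = {"node01": 119.4, "node02": 121.0, "node03": 158.7}
print(flag_slow_nodes(runs, 120.0))  # ['node03']
```

In practice the measured times would come from real benchmark runs across the whole machine, but even a simple threshold check like this catches the hardware gremlins described above before the installation technicians leave the site.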

Benchmarks need to be a key part of this process. If the delivered system cannot match the bid, reject it and negotiate a remedy in the form of a discount or extra performance, or, if necessary, make a business decision to accept the solution as-is, knowing the risk.

At the end of the day, benchmarking can be a very powerful tool if used correctly.  For a great read, check out Andrew’s full article here.