Balancing Amdahl’s Law vs. Gustafson-Barsis’ Law

Over at Dr. Dobb's, Michael McCool, Arch Robison, and James Reinders of Intel write that the two laws of parallel performance quantify strong versus weak scalability and illustrate the balancing act that is parallel optimization.

Amdahl’s Law: speedup is limited by the non-parallelizable serial portion of the work.
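
For reference, Amdahl's Law is usually written as follows (a standard formulation rather than a quotation from the article), where s is the inherently serial fraction of the work and P is the number of processors:

    % Amdahl's Law: speedup on P processors when a fraction s of the work is serial.
    % Even with unlimited processors, speedup is capped at 1/s.
    S(P) = \frac{1}{s + \frac{1 - s}{P}} \le \frac{1}{s}

Because the bound 1/s is independent of P, the serial portion dominates strong scaling no matter how many processors are added.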

The advantage of parallel programming over serial computing is increased performance. Parallel performance improvements come from three directions: reducing latency, increasing throughput, and reducing CPU power consumption. Because these three factors are often interrelated, a developer must balance all of them to maximize the efficiency of the whole. When optimizing performance, the measurement known as "speedup" lets a developer track how the latency of a specific computational problem changes as the number of processors increases. The goal of optimization may be to make a program run faster with the same workload (reflected in Amdahl's Law) or to run a program in the same time with a larger workload (reflected in Gustafson-Barsis' Law). The article explores the basic concepts of performance theory in parallel programming and how these elements can guide software optimization.
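
As a concrete illustration, here is a short C++ sketch (our own example, not code from the article; the function names and the 5% serial fraction are assumed purely for illustration) that tabulates the predictions of the two laws side by side:

    #include <cstdio>

    // Amdahl's Law: total work is fixed and a fraction 's' of it is serial.
    // Speedup on 'p' processors is bounded above by 1/s (strong scaling).
    double amdahl_speedup(double s, int p) {
        return 1.0 / (s + (1.0 - s) / p);
    }

    // Gustafson-Barsis' Law: the workload grows with 'p' so that run time
    // stays fixed; 's' is the serial fraction of the time observed on the
    // parallel machine (weak scaling). Equivalently: p - s * (p - 1).
    double gustafson_speedup(double s, int p) {
        return s + (1.0 - s) * p;
    }

    int main() {
        const double s = 0.05;  // assumed 5% serial fraction (illustrative)
        const int procs[] = {1, 2, 4, 8, 16, 64, 256};
        for (int p : procs) {
            std::printf("P=%4d  Amdahl=%8.2f  Gustafson=%8.2f\n",
                        p, amdahl_speedup(s, p), gustafson_speedup(s, p));
        }
        return 0;
    }

Note that the two laws interpret the serial fraction differently: Amdahl's s is a share of a fixed total workload, while Gustafson-Barsis' s is the serial share of elapsed time on the scaled problem, which is why the second prediction keeps growing as more processors (and more work) are added.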

Read the Full Story.