It seems everyone in HPC is familiar with Moore's Law. But just in case you missed that one, Moore's Law refers to the observation made in 1965 by Intel co-founder Gordon E. Moore that the number of transistors on integrated circuits doubles approximately every two years.
Then there is another important, but less frequently quoted, HPC observation known as Amdahl's Law. Named after computer architect Gene Amdahl, it is used to determine the maximum expected speedup for a fixed-size problem when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup achievable with multiple processors: no matter how many processors are added, the serial portion of the program sets a hard ceiling on speedup.
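In formula form, Amdahl's Law says that if a fraction P of a program's runtime can be parallelized, the speedup on N processors is S(N) = 1 / ((1 − P) + P/N). As a minimal sketch (the function name and sample numbers here are our own illustration, not from any particular benchmark):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup on n processors when a fraction p
    of the serial runtime is parallelizable (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, speedup saturates:
print(amdahl_speedup(0.95, 8))      # ~5.9x on 8 processors
print(amdahl_speedup(0.95, 1024))   # ~19.6x on 1024 processors
print(1.0 / (1.0 - 0.95))           # 20x is the hard upper limit
```

Note how quickly the returns diminish: the remaining 5% serial fraction caps the speedup at 20x regardless of processor count.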
And finally, there is one more that is perhaps not as widely known or referenced, but is extremely important and relevant for several reasons. We are referring to Gustafson's Law. It addresses a shortcoming of Amdahl's Law, which does not account for the additional computing power that becomes available as the number of machines increases. Gustafson's Law instead observes that programmers tend to scale the size of their problems to use the available equipment, solving the largest problem practical within a fixed time. Therefore, if faster and more parallel systems are available, larger problems can be solved in the same time.
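Gustafson's Law is usually written as S(N) = N − s·(N − 1), where s is the fraction of the scaled workload that remains serial. A minimal sketch, contrasting with Amdahl's fixed-size view (the function name and sample values are our own illustration):

```python
def gustafson_speedup(s: float, n: int) -> float:
    """Scaled speedup on n processors when the problem size grows
    with n and a fraction s of the runtime stays serial (Gustafson's Law)."""
    return n - s * (n - 1)

# With the same 5% serial fraction that capped Amdahl's speedup at 20x,
# the scaled speedup keeps growing with the processor count:
for n in (8, 64, 1024):
    print(n, gustafson_speedup(0.05, n))
```

Because the problem grows with the machine, the serial fraction stays a constant slice of the runtime rather than an ever-larger share of a fixed workload, which is why scaled speedup grows nearly linearly with N.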