As the global HPC community forms circles of opinion on the challenges of making exascale a reality, it seems ‘multicore optimization’ — at some level — will have to be a key ingredient. How do you define ‘multicore optimization’, and what role do you see this technology playing in the development of production exascale systems?
Everyone knows Moore’s Law, and multi-core processor advances will play an important role in exascale evolution. But a much less discussed canon, Amdahl’s Law (of serialization), will become equally or more prominent. As we deploy servers with 64, 128, or 256 cores (and beyond), we need to address how applications can take advantage of massively parallel processing capacity, given that most applications today are serial or only lightly parallel by design. Advances in tools, libraries, and education will, over time, help developers parallelize applications to a greater degree, but ultimately most problems are not massively parallel by nature; Amdahl’s Law bounds the achievable speedup by the serial fraction of the work. So we need to parallelize applications when and where possible, and also recognize that effectively utilizing the many cores in a system will require many concurrent tasks running safely and predictably within a single system.
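The diminishing returns Amdahl’s Law predicts can be made concrete with a short sketch. Under the law, a workload with serial fraction s achieves at best a speedup of 1 / (s + (1 − s) / N) on N cores, capped at 1/s no matter how many cores are added. The 5% serial fraction below is an illustrative assumption, not a measurement of any real application:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores for a given serial fraction (Amdahl's Law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    s = 0.05  # hypothetical: 5% of the work is inherently serial
    for n in (64, 128, 256):
        print(f"{n:>4} cores -> {amdahl_speedup(s, n):5.1f}x speedup")
    # Even with unlimited cores, speedup can never exceed 1/s.
    print(f"upper bound: {1 / s:.0f}x")
```

With just 5% serial work, doubling from 128 to 256 cores buys barely one extra unit of speedup, which is why the answer stresses running many concurrent tasks per system rather than relying on a single application to scale.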