Douglas Eadline posted a bit of an introspective piece via his HPC column at Linux Magazine. Put simply, he lays out what may become a growing chasm between effective multiprocessor programming paradigms and those designed for large-scale [greater than 32 core] multicore systems. If you follow the good Doctor’s articles via ClusterMonkey and Linux Mag, you’ll know that he’s written quite a few overviews of basic parallel programming methods. He’s written and/or collaborated on numerous parallel programming projects over the years, so he can certainly walk the walk. Beyond experience, I can personally attest that he’s a sharp guy. On the subject of multicore programming, I wholeheartedly agree with him.
Borrowing from Doug’s article, “writing good software is hard. Period. Writing good parallel software is harder still, but not impossible. Understanding the basics is essential in either case.” I’m a classically educated software engineer and born-again mechanical engineer. What I see happening in the software corner of our beloved HPC industry is somewhat frightening. We continue to spend an excessive amount of time developing and scaling multiprocessor programming methodologies [such as MPI] in order to lift the upper crust of computational workloads to higher realms of existence. Interconnect technologies will continue to increase in complexity and performance with ongoing system development. This certainly warrants new ways of thinking about message passing and super-scale software development. However, what about the little guy?
Case in point: I attribute my love of HPC to a wild-haired PhD mechanical engineer I call “Dad.” I remember the gleam in his eye when I substituted his RS/6000 user manuals for Scooby Doo cartoons. He continues to operate that system to the best of his abilities. One can only wonder how he runs three-dimensional shock physics codes on a workstation no more powerful than an iPhone. Good news! The company Dogberts have announced that he will finally receive an upgrade next year, at which point my poor, impressionable father will be forced to upgrade not only his hardware but his entire software infrastructure. And then I’ll receive a phone call asking what the latest and greatest “engineering” software paradigms are. Are there any?
This is where the vendor audience begins to throw their product pitches and tomatoes. One can certainly argue that useful multicore programming paradigms live in Matlab, Cilk++, OpenMP, pthreads, and a host of other up-and-comers. However, are any of these solutions more effective than their predecessors? Are any of them more effective than MPI? We are in a multicore world. We shall soon embark on a massively multicore era for which there exists no effective solution for the large mass of users who have no use for scale beyond their desk. This I call the Eadline Split.
Please feel free to comment on this subject. Unlike many of our other articles, I’ve included quite a bit of my own personal opinion and experience. Before you comment, I highly suggest you read Dr. Eadline’s article at Linux Mag here.