
Scali Touts SPEC MPI2007 Results

Scali, proprietors of several high performance computing software packages, recently cut a press release entitled “Scali demonstrates that the choice of MPI is three times more important than the selection of compilers.” Uhhhh…. What!?

Scali is touting the results of the latest version of its MPI library, Scali MPI Connect version 5.6.1, on the SPEC MPI2007 benchmark. The results reflect the 13 applications contained in the SPEC MPI2007 medium suite running on 32 nodes [128 cores]. Apparently, three of the applications came within +/- 2 percent of a similar system running HP-MPI. However, Scali is seeing results 21 percent faster on Socorro and 336 percent faster on POP2 [both over HP-MPI]. Scali went on to run the same suite of benchmarks using different compilers; there, changing compilers yielded only a 6 percent difference in performance.

Now, I can already hear all you performance gurus grinding your teeth. I, for one, have run quite a few different MPI variants in production over the years [open source and commercial]. I’ve actually sat and watched Scali’s MPI Connect outperform several open source variants on POP [...thanks Jim Tuccillo]. However, I vehemently disagree with the statement that MPI is everything. What happens when your code only uses MPI for very simple and occasional synchronization tasks? I highly doubt that Scali’s `MPI_Barrier` is 18 percent more efficient than anyone else’s.
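The point about codes that rarely touch MPI can be made concrete with a back-of-the-envelope Amdahl-style bound: only the fraction of runtime actually spent inside MPI calls can benefit from a faster MPI library. A minimal sketch (the 5 percent fraction and 2x MPI speedup below are hypothetical numbers for illustration, not from Scali’s benchmarks):

```python
def overall_speedup(mpi_fraction, mpi_speedup):
    """Amdahl-style bound: only the share of runtime spent inside
    MPI calls (mpi_fraction) is accelerated by a faster MPI."""
    return 1.0 / ((1.0 - mpi_fraction) + mpi_fraction / mpi_speedup)

# A hypothetical code spending 5% of its time in occasional
# synchronization: even an MPI that is 2x faster in those calls
# improves the total runtime by only about 2.6%.
print(round(overall_speedup(0.05, 2.0), 3))
```

Communication-bound codes like POP sit at the other end of this curve, which is why the choice of MPI can matter enormously for them and barely at all for compute-bound codes.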

Just for the record, I’m not beating up on Scali. I truly believe MPI Connect is a good product for several applications [POP being one of them]. However, let’s go ahead and say it together: “…It’s application dependent.”

Read the full post here.

Comments

  1. Scali’s release is helpful to the extent that it drives competition on performance in MPI implementations. I notice that Scali didn’t publish comparisons against OpenMPI this time. I suspect those numbers would be pretty close.

    It’s a good point that the MPI implementation can influence performance as much as the compiler, but mileage varies greatly depending on app, implementation, interconnect, and cost.

  2. HPCer, I definitely agree. In doing our benchmark comparisons for POP over a year ago, we used MVAPICH, OpenMPI and Scali. Scali performed the best, with MVAPICH pulling in a close second. I have, however, seen applications that heavily favor MVAPICH, especially in situations where one can utilize multiple InfiniBand rails with very large messages… but alas… :-)

  3. Being involved in the development of Scali MPI Connect 5.6.1 and the benchmarking leading to this press release, allow me to comment:

    As to John Leidel’s analysis: I do not interpret Scali’s press release as generalizing to state that our MPI is 18% faster than HP-MPI for _all_ applications. If you do, you have my apologies. That was not the intent.

    The number comes from a particular sized system and describes the difference of the SPECmpiM_base2007 metric of said MPI implementations. This metric is derived from running 13 different applications. How much Scali MPI Connect excels as compared to HP-MPI over the 13 applications which constitute the SPEC MPI medium suite varies as documented on the SPEC result pages. Obviously, the MPI Connect advantage is not a constant 18% over all the applications!

    Further, our claim that the compiler has a 6% impact is sort of a worst case. Here we compared the Intel 10.1 compilers vs. PathScale 3.0. A fairer comparison would have been to also use the latest PathScale compiler, version 3.1. Unfortunately, we were unable to complete our benchmarking using PathScale 3.1 due to time constraints. If we had used it, the compiler difference would have been 4%.

    As to HPCer’s comment about OpenMPI: one can only speculate as to why no SPEC MPI2007 results have been published using OpenMPI. There are companies that have made a strategic commitment to OpenMPI and are at the same time members of SPEC’s HPG group. And be aware, SPEC MPI2007 is a showcase for system vendors in the HPC space. I take it that if OpenMPI were a differentiating, positive value-add, it would have been used.
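    A note on how a single SPECmpiM number absorbs very different per-application results: SPEC composite scores are the geometric mean of the per-benchmark ratios, so one large outlier (like the POP2 result) lifts the composite far less than its headline percentage suggests. A sketch with made-up ratios (thirteen hypothetical speedups over a baseline, echoing the shape of the published numbers):

    ```python
    from math import prod

    def spec_composite(ratios):
        """SPEC-style composite: geometric mean of per-benchmark ratios."""
        return prod(ratios) ** (1.0 / len(ratios))

    # Twelve apps near parity plus one big outlier: the composite
    # lands around a ~12% advantage, not anywhere near 336%.
    ratios = [1.02] * 12 + [3.36]
    print(round(spec_composite(ratios), 3))
    ```

    This is exactly why the per-application results on the SPEC result pages are worth reading alongside the headline metric.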

  4. Rich Hickey says:

    Breathe in. Breathe out.

    It’s funny how a difference in viewpoint can cause a lot of contention.

    Hakon, it sounds like you are in the know as far as how the benchmarks were run, so you’re very familiar with the, I don’t want to say narrow because that’s not quite correct, view taken in the tests. Within the realm of that testing, Scali has performed wonderfully. This is good for the mid-range HPC arena.

    Now the other view. John, unbind your undies. (Or pants, as the bloody Brits refer to them) (-: I believe the tests you’re commenting on happened on one of my computers? No derogatory comments, please. And it did perform quite well compared to the other MPI alternatives available to us. However, it’s easy to look at a small subset of data and say Voila! Eureka! or Oh Crap!

    As far as my limited knowledge goes, Scali makes a good product. There may be some over-hype in the article; of course there is, it’s a marketing spiel. But saying that the MPI product is three times more important than the compiler choice? Well, it’s a bit tough to swallow. Then again, I’m something of a cynic. Heck, I’ve been proven wrong before.

  5. Rich, I wasn’t being derogatory, only playing “devil’s advocate.” Indeed, the tests I was referring to were on one such system previously under your supervision. Ahhh, memories. Hakon, I was in no way trying to shoot holes directly in Scali’s product. Like I said in the article, for the applications I’ve used it for, MPI Connect has worked quite well.

    …besides Rich, isn’t it past your bedtime in that part of the world? :-)

  6. It’s late in my part of the world, too, but I thought I’d jump in here as well… the data point that grabbed my attention was the 336 percent increase over the HP-MPI install for POP2. Hakon’s team deserves credit for their hard work, sure, but I’m inclined to think there was something quite atypical with the HP-MPI system in that comparison.

    Or, at the very least, it should be pointed out that while Scali’s score of 9.38 (128 CPUs) is considerably faster than the HP-MPI score of 2.14, the Cambridge ‘Darwin’ system, also using the 5160 CPUs and DDR IB (InfiniPath), scores 11.5, beating both. And yes, I’m aware that there are differences in compilers, HCA hardware, connection type, even (in the HP-MPI case) network topology.

    As the first person said, though – the more competition the better. I intend to actually contact Scali soon and obtain a trial copy of their software to run my own tests.

    As an aside, I think the reason that many scores haven’t been submitted with OpenMPI yet is simply because… well, not a lot of scores have been submitted to MPI2007, period. Give it time.

  7. Rich Hickey says:

    On the up side, this article is getting more comments than 5 others combined. A little contention can be a good thing it seems. It at least gets people chatting.

    Normally the differences would be about religion, politics, spicy food, etc. But no, it’s about an MPI benchmark. Nerds, a bunch of nerds. The lot of us.

    My main complaint is I can’t get the smiley face to work properly!!! :-) (-: {-; ;-}

  8. Rich Hickey says:

    Ha! Success! My life is complete at last!! ;-) :-)
