Reaching Exascale with Volunteer Computing?


Over at the ISC Blog, Ad Emmen from Genias Benelux writes about the possibility of building an Exascale computer using volunteer computing.

If we adopt this simplistic view, we can reach Exascale computing today. All you need to do is donate your unused computing time by connecting your computer to a volunteer computing grid, along with some 100 million friends (it would be risky to count on just 50 million, because they might not all be available all the time). Of course, the volunteer computing grids could not handle the roughly 95 million additional computers without extra central server hardware, and since we want to integrate a huge number of machines into one big supercomputer, that hardware still adds up. Estimates from the “Desktop Grids for eScience – a Road map,” published by the International Desktop Grid Federation, work out to about 1 euro of server hardware per 100 machines. For 95 million machines, that comes to 950,000 Euros, a large sum, but still far from the 1.2 billion Euros investment planned in the E.U.
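For readers who want to check the arithmetic, here is a minimal sketch. The per-machine throughput it derives is only what the 100-million-machine figure implies for a 1 ExaFlop/s target; it is not a number from the article:

```python
# Back-of-the-envelope check of the article's numbers.
EXAFLOPS = 1e18                  # target: 1 ExaFlop/s
machines = 100_000_000           # volunteers the article asks for

# Implied (assumed, not stated) sustained throughput per volunteer machine.
flops_per_machine = EXAFLOPS / machines
print(f"Implied throughput per machine: {flops_per_machine / 1e9:.0f} GigaFlop/s")

# Central server hardware, using the Desktop Grid road map's estimate
# of 1 euro of server hardware per 100 volunteer machines.
additional_machines = 95_000_000
server_cost_euros = additional_machines / 100 * 1
print(f"Server hardware cost: {server_cost_euros:,.0f} euros")   # 950,000 euros
```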

Read the Full Story.

Comments

  1. Tavi says

    What a ridiculous proposition! The processing and data transfer overhead alone would make this a completely impractical, unusable approach.

  2. Scientist Apr 26 says

    @Tavi: as a benchmark junkie you may be right, but don’t laugh too early! For scientists wanting to use computation for research, here are 3 points to consider:
    1) Taking MTBF into account, *real supercomputers* have *real issues* running anything more sophisticated than embarrassingly parallel workloads. Although they have nicer names for it, like “ensemble computing,” it is the same class of workload you would run on a desktop grid.
    2) Next, do the math on your cluster’s bandwidth/core ratio: with 20 GB/s central I/O and 40,000 cores, you actually have 0.5 MB/s per core. A desktop may have a 100 Mbit/s download speed (upload is slower, though), so its 4 cores get 2-3 MB/s per core (see the sketch after this list). So, just as in a *real cluster*, central I/O is the limit on *real use* (benchmarks aside), and Ad Emmen calculates the cost of exactly those central servers.
    3) Finally, it has been done before: BOINC reached 1 PetaFlop/s of aggregate throughput in 2008, the same year RoadRunner became the first supercomputer to sustain it.
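
    A minimal sketch reproducing the bandwidth arithmetic in point 2 above. The 20 GB/s, 40,000-core, and 100 Mbit/s figures are the ones quoted in this comment, not measured values:

    ```python
    # Reproduce the commenter's bandwidth-per-core comparison.
    # Cluster: shared central I/O divided across all cores.
    cluster_io_bytes_per_s = 20e9        # 20 GB/s central I/O
    cluster_cores = 40_000
    cluster_per_core = cluster_io_bytes_per_s / cluster_cores
    print(f"Cluster: {cluster_per_core / 1e6:.1f} MB/s per core")   # 0.5 MB/s

    # Desktop: a 100 Mbit/s downlink shared by 4 local cores.
    desktop_down_bits_per_s = 100e6
    desktop_cores = 4
    desktop_per_core = desktop_down_bits_per_s / 8 / desktop_cores
    print(f"Desktop: {desktop_per_core / 1e6:.1f} MB/s per core")   # ~3.1 MB/s
    ```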