In this video preview of SC11, Eric Bohm from the Illinois Parallel Programming Laboratory discusses the challenges and solutions involved in scaling the NAMD molecular dynamics application to support extremely large molecular systems and to run on extremely large machines.
A 100-million-atom biomolecular simulation with NAMD is one of the three benchmarks for the NSF-funded sustainable petascale machine. Simulating this large molecular system on a petascale machine presents great challenges, including handling I/O, managing a large memory footprint, and achieving good strong-scaling results. In this paper, we present parallel I/O techniques to enable the simulation. A new SMP model is designed to efficiently utilize ubiquitous wide multicore clusters by extending the Charm++ asynchronous message-driven runtime. We exploit node-aware techniques to optimize both the application and the underlying SMP runtime. Hierarchical load balancing is further exploited to scale NAMD to the full Jaguar PF Cray XT5 (224,076 cores) at Oak Ridge National Laboratory, both with and without PME full electrostatics, achieving 93% parallel efficiency (vs. 6,720 cores) at 9 ms per step for a simple cutoff calculation. Excellent scaling is also obtained on 65,536 cores of the Intrepid Blue Gene/P at Argonne National Laboratory.
Bohm will present this paper along with fellow authors Chao Mei, Yanhua Sun, Gengbin Zheng, James C. Phillips, Chris Harrison, Laxmikant V. Kale, and others at SC11 on Nov. 16, 2011.