The all-new journal Supercomputing Frontiers and Innovations has published a new paper entitled “Toward Exascale Resilience: 2014 Update.” Written by Franck Cappello, Al Geist, William Gropp, Sanjay Kale, Bill Kramer, and Marc Snir, the paper surveys what the community has learned in the past five years and summarizes the research problems still considered critical by the HPC community.
“This paper provides the information and benchmarks needed to choose the best file system for a given application from the available options: RAM disks, virtualized local hard drives, and distributed storage shared via NFS or Lustre. We report benchmarks of I/O performance and parallel scalability on Intel Xeon Phi coprocessors, along with the strengths and limitations of each option.”
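For context, a write-bandwidth microbenchmark like the one below is the simplest form of the measurements the paper reports. This sketch is not from the paper; the default target path (/dev/shm as a stand-in for a RAM disk, versus an NFS or Lustre mount point passed on the command line) and the transfer size are assumptions for illustration.

/* Minimal write-bandwidth sketch: times a sequential write of N megabytes
 * to a target path.  The paths and sizes are illustrative assumptions,
 * not taken from the Colfax benchmarks. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define MB (1024 * 1024)

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "/dev/shm/iobench.dat";
    size_t total_mb = (argc > 2) ? (size_t)atoll(argv[2]) : 256;
    char *buf = malloc(MB);
    memset(buf, 'x', MB);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    FILE *f = fopen(path, "wb");
    if (!f) { perror("fopen"); return 1; }
    for (size_t i = 0; i < total_mb; i++)
        fwrite(buf, 1, MB, f);
    fflush(f);
    fsync(fileno(f));   /* force data out of the page cache before timing stops */
    fclose(f);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
    printf("%zu MB to %s in %.3f s (%.1f MB/s)\n",
           total_mb, path, sec, (double)total_mb / sec);
    free(buf);
    return 0;
}

Pointing the same binary at a RAM disk, a local virtual drive, and a parallel file system mount makes the bandwidth gaps between the options directly comparable.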
“Over the past two and a half years, the team worked on a DOE-funded project, Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT), to combine new and existing battery models into engineering simulation software to shorten design cycles and optimize batteries for increased performance, safety, and lifespan. To achieve these goals, the team has been modeling thermal management, electrochemistry, ion transport, and fluid flow.”
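As a rough illustration of the thermal-management piece, the sketch below takes explicit finite-difference steps of the 1-D heat equation, the kind of transport kernel a battery thermal model builds on. It is not taken from the CAEBAT software; all parameters are assumed.

/* Illustrative only: explicit finite-difference stepping of the 1-D heat
 * equation dT/dt = alpha * d2T/dx2.  Material and grid parameters are
 * made up for the example. */
#include <stdio.h>

#define N 100

int main(void) {
    double T[N], Tn[N];
    const double alpha = 1e-5;          /* thermal diffusivity, m^2/s (assumed) */
    const double dx = 1e-3, dt = 0.01;  /* grid spacing and time step (assumed) */
    const double r = alpha * dt / (dx * dx);   /* must stay <= 0.5 for stability */

    for (int i = 0; i < N; i++) T[i] = 300.0;  /* uniform 300 K initial state */
    T[N / 2] = 350.0;                          /* local hot spot, e.g. one cell */

    for (int step = 0; step < 1000; step++) {
        for (int i = 1; i < N - 1; i++)
            Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
        Tn[0] = Tn[1]; Tn[N - 1] = Tn[N - 2];  /* insulated boundaries */
        for (int i = 0; i < N; i++) T[i] = Tn[i];
    }
    printf("center temperature after 1000 steps: %.2f K\n", T[N / 2]);
    return 0;
}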
“The continuous demands of competition help maintain strong markets for high performance computing systems, even amidst apparent paradoxes. Our surveys show that HPC users are bucking the trend of reducing spending on servers, and research indicates modest growth for HPC in all economic sectors (industrial, academic, and government) over the next four years.”
“In this paper, we consider a coupled solution in which a multiphase flow simulator is coupled to an analysis approach used to extract the interfacial geometries as the flow evolves. This has been implemented using MPI to target heterogeneous nodes equipped with GPUs. The GPUs evolve the multiphase flow solution using the lattice Boltzmann method while the CPUs compute upscaled measures of the morphology and topology of the phase distributions and their rate of evolution. Our approach is demonstrated to scale to 4,096 GPUs and 65,536 CPU cores to achieve a maximum performance of 244,754 million-lattice-node updates per second (MLUPS) in double precision execution on Titan.”
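The division of labor described in the abstract can be sketched in plain MPI. The following is a minimal illustration under stated assumptions, not the authors' code: the GPU lattice Boltzmann update and the CPU morphology analysis are replaced by locally defined stubs (evolve_lbm_on_gpu and analyze_morphology, both hypothetical names), and assigning roles by even/odd rank is an assumption made for the example.

/* Sketch of the CPU/GPU work division described above: ranks split into a
 * "solver" group (which would drive the GPU lattice Boltzmann update) and
 * an "analysis" group (which computes morphology measures on the CPU).
 * Both kernels are stand-in stubs. */
#include <mpi.h>
#include <stdio.h>

static void evolve_lbm_on_gpu(int step)  { (void)step; /* stand-in for GPU LBM update   */ }
static void analyze_morphology(int step) { (void)step; /* stand-in for CPU analysis pass */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int is_solver = (rank % 2 == 0);   /* assumed placement: even ranks drive GPUs */
    MPI_Comm role_comm;
    MPI_Comm_split(MPI_COMM_WORLD, is_solver, rank, &role_comm);

    for (int step = 0; step < 10; step++) {
        if (is_solver)
            evolve_lbm_on_gpu(step);    /* advance the multiphase flow solution */
        else
            analyze_morphology(step);   /* extract interfacial measures in parallel */
        MPI_Barrier(MPI_COMM_WORLD);    /* simplistic sync; real code overlaps phases */
    }

    MPI_Comm_free(&role_comm);
    MPI_Finalize();
    return 0;
}

A production code like the one on Titan would overlap the two phases and exchange field data between the groups rather than synchronizing at every step.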
Colfax Research has published a new white paper entitled “Cluster-Level Tuning of a Shallow Water Equation Solver on the Intel MIC Architecture.” Written by Andrey Vladimirov, the paper demonstrates the optimization of the execution environment of a hybrid OpenMP+MPI computational fluid dynamics code (shallow water equation solver) on a cluster enabled with Intel Xeon Phi coprocessors.
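A hybrid OpenMP+MPI code of this kind typically pairs MPI halo exchange between ranks with OpenMP threading inside each rank. The skeleton below sketches that execution model; it is not the Colfax solver, and the stencil is a generic smoothing update standing in for the shallow water equations. The slab size and step count are assumptions.

/* Generic hybrid OpenMP+MPI skeleton of the kind the paper tunes: each MPI
 * rank owns a slab of the domain, OpenMP threads update it, and halo cells
 * are exchanged with neighboring ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define LOCAL_N 1024   /* interior cells per rank, plus two halo cells (assumed) */

int main(int argc, char **argv) {
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double u[LOCAL_N + 2] = {0}, un[LOCAL_N + 2] = {0};
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < 100; step++) {
        /* halo exchange with neighboring ranks (the MPI part) */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* threaded stencil update over the local slab (the OpenMP part) */
        #pragma omp parallel for
        for (int i = 1; i <= LOCAL_N; i++)
            un[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);
        for (int i = 1; i <= LOCAL_N; i++) u[i] = un[i];
    }

    if (rank == 0) printf("done on %d ranks, %d threads/rank\n",
                          size, omp_get_max_threads());
    MPI_Finalize();
    return 0;
}

On a Xeon Phi cluster, the tuning questions the paper addresses are exactly the knobs visible here: how many ranks per coprocessor, how many threads per rank, and how to place them.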