Today the ISC Cloud & Big Data Conference announced that Dr. Jan Vitt from DZ Bank will keynote its event in September.
“Large-scale HPC IO is usually done either with a file per process or to a single shared file. Single-shared-file IO does not scale well in Lustre compared to file per process. This presentation from Cray’s Patrick Farrell will give details, examine the reasons for this, and explore existing and potential solutions. Group locks and a new feature, lock ahead, will be discussed in the context of strided IO.”
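The two IO patterns the abstract contrasts can be sketched in a few lines. Below is a minimal, serial Python illustration (not from the talk, and not real MPI-IO code; `rank` stands in for an MPI process and the function names are invented for illustration). In the shared-file case, each rank writes interleaved, strided blocks into one file — exactly the access pattern where multiple writers contend for the same Lustre extent locks, which group locks and lock ahead aim to relieve.

```python
import os

def file_per_process(data_chunks, out_dir):
    # Pattern 1: each "process" (rank) writes its chunk to its own file.
    # Scales well in Lustre because there is no lock contention between ranks.
    paths = []
    for rank, chunk in enumerate(data_chunks):
        path = os.path.join(out_dir, f"out.{rank}")
        with open(path, "wb") as f:
            f.write(chunk)
        paths.append(path)
    return paths

def shared_file_strided(data_chunks, block, path):
    # Pattern 2: all ranks write interleaved (strided) blocks of size `block`
    # into one shared file. Block i of rank r lands at byte offset
    # ((i * nprocs) + r) * block, so writes from different ranks interleave.
    nprocs = len(data_chunks)
    with open(path, "wb") as f:
        for rank, chunk in enumerate(data_chunks):
            for i in range(0, len(chunk), block):
                f.seek(((i // block) * nprocs + rank) * block)
                f.write(chunk[i:i + block])
```

With two ranks holding `b"AAAABBBB"` and `b"aaaabbbb"` and a 4-byte block, the shared file comes out as `AAAAaaaaBBBBbbbb` — the interleaving that, on a real file system, puts many writers into the same lock ranges.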
“Pleiades, one of the world’s most powerful supercomputers, represents NASA’s state-of-the-art technology for meeting the agency’s supercomputing requirements, enabling NASA scientists and engineers to conduct modeling and simulation for NASA missions. Powered by Lustre, this distributed-memory SGI ICE cluster is connected with InfiniBand in a dual-plane hypercube topology.”
“A little over a year ago LANL’s HPC Division purchased and fielded our first general-purpose InfiniBand-based Lustre parallel file system. This new Lustre deployment, being the first of several similar planned deployments, gave us the opportunity to design a new storage backbone from the ground up and to gain in-depth experience with and insight into Lustre technology in order to facilitate the installation and configuration of future systems.”
“The second generation of SDSC’s Data Oasis Lustre storage is coming online to support Comet, a new XSEDE cluster targeted at the long tail of science. The servers have been designed with Lustre on ZFS in mind, and also update the network to use bonded 40GbE interfaces. The raw storage totals 7.7 PB and is again based on commodity hardware provided by Aeon Computing, maintaining our focus on cost.”