“Back in July 2012, Whamcloud was awarded the Storage and I/O Research & Development subcontract for the Department of Energy’s FastForward program. Shortly afterward, the company was acquired by Intel. Now nearing completion, the two-year contract covers key R&D for a new object storage paradigm for exascale HPC, and the resulting technology will also address the next-generation storage mechanisms required by the Big Data market.”
“This presentation examines the HPC performance characteristics of CAE software and the current state of GPU parallel solvers in commercial CAE that support product design in manufacturing industries. Industry case studies will be presented covering the adoption of GPUs and HPC technology for production CAE and the benefits they provide. Rapid simulation on GPUs demonstrates the potential of a novel HPC technology to transform current practices in engineering analysis and design optimization.”
“For NVLink to deliver its highest value, it must function properly with unified memory. That means the Memory Management Units in the CPUs have to be aware of NVLink DMA operations and update the appropriate VM structures. The operating system needs to know when memory pages have been altered via NVLink DMA, and this can’t be solely the responsibility of the drivers. Tool developers also need the details so that MPI and other communications protocols can make use of the new interconnect.”
In this video from GTC 2014, Steve Oberlin from Nvidia describes his new role as Chief Technical Officer for Accelerated Computing. Along the way, he discusses the HPC lessons learned from the Cray T3E and other systems, Nvidia’s plans to tackle the challenges of the HPC Memory Wall, the current status of Project Denver, and how Nvidia plans to couple to the POWER architecture in future systems.
“A key mission for OpenSFS is to keep attendees up to date on Lustre’s progress. In addition to the technical talks identifying Lustre’s current capabilities, departing community representative director Tommy Minyard will deliver an OpenSFS-commissioned report from an independent organization examining the current state of Lustre.”
In this slidecast, Rob Clyde from Adaptive Computing describes Big Workflow — the convergence of Cloud, Big Data, and HPC in enterprise computing. “The explosion of big data, coupled with the collision of HPC and cloud, is driving the evolution of big data analytics,” said Rob Clyde, CEO of Adaptive Computing. “A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately, and cost-effectively, but also provides a distinct competitive advantage.”