Fred Streitz from Lawrence Livermore National Lab presented this talk at the Stanford HPC Conference. “The HPC Innovation Center bridges American industry’s growing need for advanced solutions to complex challenges in research and development with LLNL’s forefront supercomputing and disruptive scientific capabilities.”
In this whitepaper from Adaptive Computing, we learn about the new concept of Big Workflow and how it directly addresses the needs of critical, data-intensive applications. By creating more intelligence around data control, Big Workflow provides a way for big data, HPC, and cloud environments to interoperate, and to do so dynamically based on which applications are running.
Ramesh Balakrishnan from Argonne presented this talk at the Stanford HPC Conference. “The main scientific challenge in fluid dynamics remains that of gaining better insight into the physics of turbulence and its role in the transfer of momentum, heat, and mass in engineering applications, which include the aerodynamics of high-lift devices, chemically reacting flows in combustion systems such as the combustors of jet engines, and the aeroacoustics of low- and high-speed flows.”
Babak Hejazialhosseini from ETH presented this talk at the Stanford HPC Conference. “This talk outlines the challenges that hinder the effective solution of complex flows on contemporary supercomputers. It demonstrates several generalizable techniques toward achieving unprecedented performance on both IBM Blue Gene/Q and Cray supercomputers. Simulation of cloud cavitation collapse, a challenging flow problem with a broad range of applications, is presented.”
Nicholas Dube from HP presented this talk at the Adaptive Computing booth at SC13. “The ESIF data center is designed to achieve an annualized average power usage effectiveness (PUE) rating of 1.06 or better. Going beyond traditional PUE measurements, the NREL HPC Data Center is using warm-water liquid cooling for its high-power computer components, then capturing and reusing that waste heat as the primary heat source in the ESIF offices and laboratory space.”
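For readers unfamiliar with the metric, PUE is the ratio of total facility energy to the energy consumed by the IT equipment alone, so an ideal data center scores 1.0. A minimal sketch of the arithmetic behind NREL's 1.06 target (the energy figures below are invented for illustration):

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.

    A value of 1.0 would mean every watt drawn by the facility goes to
    computing; overhead (cooling, power conversion, lighting) pushes it higher.
    """
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Hypothetical annual figures: 1,060 MWh total facility use,
# 1,000 MWh consumed by IT equipment.
print(round(pue(1_060_000, 1_000_000), 2))  # 1.06
```

At a PUE of 1.06, only about 6% of the facility's energy is overhead, which is why reusing the warm-water cooling loop's waste heat for offices and lab space matters: it turns part of that overhead into useful output.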
“For environments where large memory systems are critical, such as bioinformatics, legacy databases, and Big Data, we have focused heavily on performance enhancements. We strive to make large memory systems as fast as possible. It is interesting to note that in some cases, our VMs are faster than physical machines. We do this by prefetching and caching data based on our understanding of memory placement and access patterns.”
“The INCITE and PRACE programs give access to ever-increasing resources, allowing these technologies to be applied to industrial-scale systems. Drawing on past and ongoing research performed at CERFACS, this presentation highlights the scientific breakthroughs enabled by HPC on exascale machines for reacting flows in gas turbines and explosions in buildings.”