In this video from ISC’14, Wolfgang Gentzsch announces the new UberCloud Technical Computing Marketplace and Appstore.
Industries from life sciences to finance rely on HPC clusters to perform complex, critical operations, and that reliance on HPC systems will only grow. So the all-important question arises: how do you select, deploy, and manage it all? Fortunately, IBM, Intel, and NCAR have teamed up to share their view on best practices for selecting an HPC cluster, drawing on the process behind building the NCAR-Wyoming Supercomputing Center.
“Over the past two and a half years, the team worked on a DOE-funded project, Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT), to combine new and existing battery models into engineering simulation software to shorten design cycles and optimize batteries for increased performance, safety and lifespan. In order to achieve these goals the team has been modeling thermal management, electrochemistry, ion transport and fluid flow.”
High performance technical computing continues to transform the capabilities of organizations across a range of industries—helping them to tackle unprecedented big data analysis, generate competitive business advantage, and expand the limits of science and medicine. To keep pushing those boundaries, organizations are continually seeking ways to get more out of their technical computing systems.
Altair has teamed up with Intel to take on this challenge and provide a suite of solutions for CAE in HPC. “For many companies it is simply too hard and/or costly to get started with HPC in the first place. Altair addresses this through an easy-to-use web interface that is application-aware (Compute Manager).”
“In this talk we consider multiphysics applications our group has been working on in recent years from algorithmic, software and hardware perspectives. These encompass engineering applications such as extreme wave-structure interaction phenomena in floating structures, and coupled multi-material flow with heat transfer and polydisperse mixtures, the latter with applications in geology.”
The requirement for both application scaling (capability computing) and system throughput (capacity computing) continues to grow. The “THUMS” human body model has 1.8 million elements, and safety simulations of over 50 million elements are on the roadmap. Models of this size will require scaling to thousands of cores just to maintain the current turnaround time.
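To see why models of this size imply thousands of cores, a back-of-the-envelope estimate helps: under ideal weak scaling, holding turnaround time constant means core count must grow in proportion to model size. A minimal sketch, where the baseline core count (256) is a hypothetical assumption, not a figure from the article:

```python
def cores_needed(base_elements: float, base_cores: int, target_elements: float) -> float:
    """Cores required to keep turnaround time constant, assuming
    perfectly linear (ideal) weak scaling: cores grow with model size."""
    return base_cores * target_elements / base_elements

# Hypothetical baseline: suppose the 1.8M-element THUMS model
# currently runs on 256 cores with acceptable turnaround time.
cores = cores_needed(1.8e6, 256, 50e6)
print(round(cores))  # -> 7111, i.e. thousands of cores, as noted above
```

Real scaling is rarely ideal (communication and load-imbalance overheads grow with core count), so this is a lower bound on the cores a 50-million-element safety simulation would need.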