Storage and data management have arguably become the most pressing HPC “pain points,” with access density a particularly troubling issue. Many HPC sites are doubling their storage capacity every two to three years, but adding capacity does not address the access density, data movement, and related storage issues many HPC buyers face. When storage lags behind, investments in processing, networking, middleware, and applications are choked off by bottlenecks in the storage infrastructure. If you’re looking to maximize the throughput of your technical computing infrastructure, storage performance often holds the key.
IBM Sequoia is a petascale Blue Gene/Q supercomputer built by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing (ASC) Program. Delivered to Lawrence Livermore National Laboratory (LLNL) in 2011 and fully deployed in June 2012, Sequoia ranks #3 on the June 2014 TOP500 list.
Cray, in St. Paul, MN, is seeking a Storage Engineer in our Job of the Week.
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Cray CS-Storm supercomputer based on Nvidia GPUs. After that, the discussion turns to exascale investment recommendations coming out of a new report from a Department of Energy Task Force.
“Confronting power limitations and the high cost of data movement, new supercomputing architectures within the DOE are requiring users to make changes to application codes to achieve high performance. More specifically, users will need to exploit greater on-node parallelism and longer vector units, and restructure code to take advantage of memory locality. In this presentation you will learn about coming architectural trends and what you can do now to start preparing your application.”
A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Task Force on High Performance Computing of the Secretary of Energy Advisory Board. Commissioned by Secretary of Energy Dr. Ernest J. Moniz, the report includes recommendations as to where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.
Achieving good performance on any system requires balancing many competing factors. Rather than simply minimizing communication (or floating-point operations, or memory motion), the goal for high-end systems is to achieve the lowest-cost solution. And while cost is typically measured as time to solution, other metrics, including total energy consumed, are likely to be important in the future. Making effective use of the next generations of extreme-scale systems requires rethinking the algorithms, the programming models, and the development process. This talk will discuss these challenges and argue that performance modeling, combined with a more dynamic and adaptive style of programming, will be necessary for extreme-scale systems.