Fred Streitz presented this talk at the DOE Computational Science Graduate Fellowship Review. Streitz leads efforts to develop HPC applications that push the limits of leadership-class computational capability to address forefront scientific problems. His current focus, as Director of the HPC Innovation Center, is broadening the use of high performance computing by U.S. industry to promote global competitiveness.
NERSC has accepted a selection of key DOE science projects into its NERSC Exascale Scientific Applications Program (NESAP), a collaborative effort in which NERSC will partner with code teams to prepare for the NERSC-8 Cori manycore architecture. NESAP represents an important opportunity for researchers to prepare application codes for the new architecture and to help advance […]
“MPI is in the national interest. The U.S. government tasks Lawrence Livermore National Laboratory with solving the nation’s and the world’s most difficult problems. These range from global security, disaster response and planning, and drug discovery to energy production and climate change, to name a few. To meet this challenge, LLNL scientists run large-scale computer simulations on Linux clusters with InfiniBand networks, and MVAPICH serves a critical role in this effort. In this talk, I will highlight some of the recent work that MVAPICH has enabled.”
IBM Sequoia is a petascale Blue Gene/Q supercomputer built by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing (ASC) Program. It was delivered to LLNL in 2011 and was fully deployed in June 2012. Sequoia ranked #3 on the June 2014 TOP500 list.
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Cray CS-Storm supercomputer based on Nvidia GPUs. After that, the discussion turns to exascale investment recommendations coming out of a new report from a Department of Energy Task Force.
“Confronting power limitations and the high cost of data movement, new supercomputing architectures within the DOE require users to make changes to application codes to achieve high performance. More specifically, users will need to exploit greater on-node parallelism and longer vector units, and restructure code to take advantage of memory locality. In this presentation you will learn about coming architectural trends and what you can do now to start preparing your application.”
A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Task Force on High Performance Computing of the Secretary of Energy Advisory Board. Commissioned by Secretary of Energy Dr. Ernest J. Moniz, the report includes recommendations as to where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.