Today Intel announced that the company will deliver two next-generation supercomputers to Argonne National Laboratory. “The contract is part of the DOE’s multimillion-dollar initiative to build state-of-the-art supercomputers at Argonne, Lawrence Livermore, and Oak Ridge National Laboratories that will be five to seven times more powerful than today’s top supercomputers.”
Over at Live Science, Shannon Hall writes that a new global map of the world’s oceans is so visually stunning that it could be mistaken for art. Computed on LANL supercomputers, the simulation is a component of the DOE’s Accelerated Climate Model for Energy (ACME), which is expected to be the most complete climate and Earth system model once it is finished.
“Today, we will hear from a distinguished panel of witnesses about the importance of high performance computing to American technological competitiveness, specifically focusing on the Department of Energy’s Advanced Scientific Computing Research program, also known as the ‘ASCR’ program within the Office of Science.”
In this video, Bill Harrod from the Department of Energy accepts the HPC Vanguard Award from Rich Brueckner and Thomas Sterling at SC14. “Launched by The Exascale Report in 2013, the HPC Vanguard Award recognizes critical leaders in the HPC community’s strategic push to achieve exascale levels of supercomputing performance.”
In this video, the Radio Free HPC team meets at SC14 in New Orleans to discuss the recent news that Nvidia and IBM will build two CORAL 150+ Petaflop supercomputers in 2017 for Lawrence Livermore and Oak Ridge National Laboratories. The two machines will feature IBM POWER9 processors coupled with Nvidia’s future Volta GPU technology. NVLink will be a critical piece of the architecture as well, along with a system interconnect powered by Mellanox.
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Cray CS-Storm supercomputer based on Nvidia GPUs. After that, the discussion turns to exascale investment recommendations coming out of a new report from a Department of Energy Task Force.
“Successful computational scientists are experts in a scientific field, such as chemistry, physics, or astrophysics; are knowledgeable about both mathematical representations and algorithmic implementations; and also specialize in developing and optimizing scientific application codes to run on computers, both large and small. A truly successful computational science investigation requires the ‘three A’s’: a compelling Application, the appropriate Algorithm, and the underlying Architecture.”
“For those who haven’t been following the details of one of DOE’s more recent procurement rounds, the NERSC-8 and Trinity request for proposals (RFP) explicitly required that all vendor proposals include a burst buffer to address the capability of multi-petaflop simulations to dump tremendous amounts of data in very short order. The target use case is for petascale checkpoint-restart, where the memory of thousands of nodes (hundreds of terabytes of data) needs to be flushed to disk in an amount of time that doesn’t dominate the overall execution time of the calculation.”
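The checkpoint-restart case above comes down to simple bandwidth arithmetic: the time to flush a checkpoint is its size divided by the aggregate write bandwidth, and that flush time must stay small relative to the compute interval between checkpoints. A minimal sketch of that estimate, using assumed round-number figures (a 300 TB checkpoint, 500 GB/s to the parallel file system, 5 TB/s to a flash burst buffer — illustrative values, not published system specs):

```python
# Illustrative checkpoint flush-time estimate. All sizes and bandwidths
# below are assumptions for the example, not actual system specifications.

def flush_time_seconds(checkpoint_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Time to flush a full-system checkpoint at a given aggregate write bandwidth."""
    return checkpoint_bytes / bandwidth_bytes_per_sec

TB = 1e12
GB = 1e9

checkpoint_size = 300 * TB            # assumed: aggregate memory of thousands of nodes
pfs_bandwidth = 500 * GB              # assumed: parallel file system, 500 GB/s
burst_buffer_bandwidth = 5 * 1000 * GB  # assumed: flash burst buffer, 5 TB/s

print(flush_time_seconds(checkpoint_size, pfs_bandwidth) / 60)       # 10.0 (minutes)
print(flush_time_seconds(checkpoint_size, burst_buffer_bandwidth))   # 60.0 (seconds)
```

On these assumed numbers, flushing straight to disk costs ten minutes per checkpoint, while a burst buffer absorbs the same dump in about a minute and drains to disk asynchronously while the calculation resumes — which is why the RFP could require one.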