The U.S. Department of Energy announced two new High Performance Computing (HPC) awards that put the nation on a fast track to next-generation exascale computing, helping to advance U.S. leadership in scientific research and promote America’s economic and national security.
In this video, the Radio Free HPC team meets at SC14 in New Orleans to discuss the recent news that Nvidia and IBM will build two CORAL 150+ petaflop supercomputers in 2017 for Lawrence Livermore and Oak Ridge National Laboratories. The two machines will feature IBM POWER9 processors coupled with Nvidia’s future Volta GPU technology. NVLink will be a critical piece of the architecture as well, along with a system interconnect powered by Mellanox.
Today AMD announced that, for the third straight year, it has been awarded research grants to develop critical technologies needed for extreme-scale computing under the U.S. Department of Energy (DOE) Extreme-Scale Computing Research and Development Program, known as “FastForward 2.”
This Week in HPC: Exascale Bill Faces Uncertain Future in U.S. Senate, and HPC Gets in the Entrepreneurial Spirit
EPiGRAM is an EC-funded FP7 project on exascale computing. The aim of the EPiGRAM project is to prepare Message Passing and PGAS programming models for exascale systems by fundamentally addressing their main current limitations. The concepts developed will be tested and guided by two applications in the engineering and space weather domains chosen from the suite of codes in current EC exascale projects.
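To make the two programming models concrete, here is a minimal sketch in C using MPI, contrasting a two-sided message-passing exchange with a PGAS-style one-sided put into a remote memory window. It is an illustrative example only, assuming a standard MPI library, and is not code from the EPiGRAM project.

#include <mpi.h>
#include <stdio.h>

/* Illustrative only: contrasts two-sided message passing with a
 * PGAS-style one-sided put via MPI RMA windows. Run with 2 ranks. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Two-sided: sender and receiver both participate explicitly. */
    int msg = 0;
    if (rank == 0) {
        msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("two-sided: rank 1 received %d\n", msg);
    }

    /* One-sided (PGAS-style): rank 0 writes directly into memory
     * exposed by rank 1, with no matching receive call. */
    int window_buf = 0;
    MPI_Win win;
    MPI_Win_create(&window_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int value = 99;  /* origin buffer must stay valid until the closing fence */
    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("one-sided: rank 1's window now holds %d\n", window_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

The one-sided path is the part EPiGRAM-style work targets at scale: the target rank does not post a matching call, so communication and computation can be decoupled more aggressively than in the two-sided model.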
In this video from the 2014 Argonne Training Program on Extreme-Scale Computing, Rick Stevens from Argonne presents: Exascale, Data and Biology. “At ATPESC 2014, we captured 67 hours of lectures in 86 videos of presentations by pioneers and elites in the HPC community on topics ranging from programming techniques and numerical algorithms best suited for leading-edge HPC systems to trends in HPC architectures and software most likely to provide performance portability through the next decade and beyond.”
As the countdown to exascale continues, exascale-like storage problems are already showing up in today’s massively parallel, heterogeneous HPC systems. Historically, storage and I/O have kept pace with growing system demands, but because of the limitations of spinning media and the cost of solid-state storage technologies, storage performance improvements now come at disproportionately higher cost and lower efficiency than their compute counterparts.
“As the name indicates, a NAM is basically a storage device plugged into the interconnect network of a cluster. That sounds pretty simple and straightforward. But the underlying technology is quite new and exciting, and the NAM concept enables entirely new approaches for using memory as a shared resource.”
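One rough way to picture that shared-resource idea, assuming the NAM is presented to applications much like a remote memory region on the fabric, is the MPI RMA sketch below: one rank stands in for the network-attached memory and exposes a counter, and every other rank atomically accumulates into it over the interconnect. This is a hypothetical analogy in C, not the actual NAM programming interface.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical sketch of "memory as a shared resource on the network":
 * rank 0 stands in for the network-attached memory and exposes a counter;
 * all other ranks atomically add to it across the interconnect.
 * This is NOT the real NAM interface, only an MPI RMA analogy. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long counter = 0;   /* lives in the "NAM" rank's exposed window */
    MPI_Win win;

    /* Only rank 0 exposes memory; the other ranks create an empty window. */
    MPI_Win_create(rank == 0 ? &counter : NULL,
                   rank == 0 ? (MPI_Aint)sizeof(long) : (MPI_Aint)0,
                   sizeof(long), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    long one = 1;       /* origin buffer must stay valid until the fence */
    MPI_Win_fence(0, win);
    if (rank != 0)
        /* Atomic remote update of the shared counter on rank 0. */
        MPI_Accumulate(&one, 1, MPI_LONG, 0, 0, 1, MPI_LONG, MPI_SUM, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("shared counter after %d clients: %ld\n", size - 1, counter);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

The point of the analogy is that the memory lives behind the network rather than behind any one compute node's CPU, and clients update it with one-sided operations instead of exchanging messages with a server process.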