Today AMD announced that, for the third straight year, it has been awarded research grants to develop critical technologies for extreme-scale computing under the U.S. Department of Energy (DOE) Extreme-Scale Computing Research and Development Program, known as “FastForward 2.”
This Week in HPC: Exascale Bill Faces Uncertain Future in U.S. Senate, and HPC Gets in the Entrepreneurial Spirit
EPiGRAM is an EC-funded FP7 project on exascale computing. The aim of the EPiGRAM project is to prepare Message Passing and PGAS programming models for exascale systems by fundamentally addressing their main current limitations. The concepts developed will be tested and guided by two applications in the engineering and space weather domains chosen from the suite of codes in current EC exascale projects.
In this video from the 2014 Argonne Training Program on Extreme-Scale Computing, Rick Stevens from Argonne presents: Exascale, Data and Biology. “At ATPESC 2014, we captured 67 hours of lectures in 86 videos of presentations by pioneers and elites in the HPC community on topics ranging from programming techniques and numerical algorithms best suited for leading-edge HPC systems to trends in HPC architectures and software most likely to provide performance portability through the next decade and beyond.”
As the countdown to Exascale continues, Exascale-like storage problems are already showing up in today’s massively parallel, heterogeneous HPC systems. Historically, storage and I/O have kept pace with growing system demands, but, because of the limitations of spinning media and the cost of solid state storage technologies, storage performance improvements have come at a disproportionately higher cost and lower efficiency than their compute counterparts.
“As the name indicates: A NAM is basically a storage device plugged into the interconnect network of a Cluster. That sounds pretty simple and straightforward. But the underlying technology is quite new and exciting and the NAM concept enables entirely new approaches for using memory as a shared resource.”
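The idea of memory as a named, shared resource that independent processes attach to can be illustrated, purely as a single-node analogy, with Python’s standard `multiprocessing.shared_memory` module. This is not the NAM technology itself, which attaches memory to the cluster interconnect rather than to one node; the block names and payload below are illustrative only.

```python
from multiprocessing import shared_memory

# "Producer" creates a named shared block (analogy: memory exposed as a
# shared resource that others can find and attach to by name).
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:4] = b"data"

# "Consumer" attaches to the same block by name, without any copy through
# a message channel -- the data is read in place.
attached = shared_memory.SharedMemory(name=shm.name)
payload = bytes(attached.buf)[:4]

attached.close()
shm.close()
shm.unlink()  # free the shared block once all users have detached
```

The key property the analogy captures is decoupling: the consumer needs only the block’s name, not a live connection to the producer, which is what makes a network-attached memory device usable as a cluster-wide shared resource.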
“Exascale levels of computing pose many system- and application-level computational challenges. Mellanox as a provider of end-to-end communication services is progressing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements which significantly improve InfiniBand’s scalability, performance, and ease of use.”