“As data explodes in volume, velocity and variety, and the processing requirements to address business challenges become more sophisticated, the line between traditional and high-performance computing is blurring,” said Bill Mannel, vice president and general manager, HPC and Big Data, HP Servers. “With this alliance, we are giving customers access to the technologies and solutions as well as the intellectual property, portfolio services and engineering support needed to evolve their compute infrastructure to capitalize on a data-driven environment.”
In this video, Professors Jean Frechet and David Keyes describe their vision for KAUST’s new Shaheen II Cray XC40 supercomputer. “The initial configuration of Shaheen II will feature nearly 200,000 x86 processor cores. At initial delivery, anticipated in March 2015, Shaheen II will deliver over 5 petaflops of peak performance, with 17.6 petabytes of Sonexion Lustre storage and greater than 790 terabytes of memory.”
“Modern HPC systems are complex due to the sheer number of components that comprise them. With this complexity comes the reality of failures. One particularly damaging and little-understood type of failure is silent data corruption (SDC). SDC occurs when program state changes without intervention of the application or the system. An understanding of how applications handle state perturbations and how these corrupted values propagate through HPC applications is key to mitigating SDC's effects. In this talk, we present our results from fault injection experiments on an Algebraic Multigrid linear solver.”
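The kind of perturbation such fault-injection studies introduce can be sketched with a minimal single-bit-flip injector. This is an illustrative helper under assumed semantics (flip one random bit in one random element of a solver's state vector), not the experimental harness used in the talk:

```python
import random
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 representation of a double."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    bits ^= 1 << bit
    (corrupted,) = struct.unpack("<d", struct.pack("<Q", bits))
    return corrupted

def inject_fault(state, rng=random):
    """Silently corrupt one random element of a state vector,
    mimicking an undetected memory error. Returns the location
    so an experiment can later trace how the error propagates."""
    i = rng.randrange(len(state))
    bit = rng.randrange(64)
    state[i] = flip_bit(state[i], bit)
    return i, bit
```

A flip in a high-order exponent or sign bit typically derails a solver visibly, while a low-order mantissa flip may propagate silently, which is exactly the distinction such experiments probe.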
“Numerical simulations on supercomputers play an ever more important role in astrophysics. They have become the tool of choice to predict the non-linear outcome of the initial conditions left behind by the Big Bang, providing crucial tests of cosmological theories. However, the problem of galaxy and star formation confronts us with a staggering multi-physics complexity and an enormous dynamic range that severely challenges existing numerical methods.”
In this video, Kyle Lamb from the Infrastructure Team at Los Alamos National Lab describes the unique challenges he faces at a facility known for being at the forefront of technology. Kyle addresses the future of storage for High Performance Computing and the ways LANL is partnering with Seagate to tackle the changes on the horizon.
Bill Gropp from the University of Illinois at Urbana-Champaign presented this talk at the Blue Waters Symposium. “The large number of nodes and cores in extreme scale systems requires rethinking all aspects of algorithms, especially for load balancing and for latency hiding. In this project, I am looking at the use of nonblocking collective routines in Krylov methods, the use of speculation and large memory in graph algorithms, the use of locality-sensitive thread scheduling for better load balancing, and model-guided communication aggregation to reduce overall communication costs. This talk will discuss some current results and future plans, and possibilities for collaboration in evaluating some of these approaches.”
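The latency-hiding idea in Krylov methods is that each iteration's global dot products (reductions across all nodes) can be started with a nonblocking collective and overlapped with local work. Below is a serial numpy sketch of conjugate gradients with comments marking where an `MPI_Iallreduce` would go in a distributed version; it is a conceptual illustration, not code from the project:

```python
import numpy as np

def cg_overlap_sketch(A, b, tol=1e-8, max_iter=200):
    """Conjugate gradient iteration, annotated with where nonblocking
    reductions would let communication overlap computation.
    Serial numpy stands in for the distributed pieces."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r           # global reduction: an MPI_Iallreduce
                             # could be started here ...
    for _ in range(max_iter):
        Ap = A @ p           # ... and overlapped with this local
                             # matrix-vector product before waiting
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r       # second reduction, likewise overlappable
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```

At extreme scale the blocking allreduce in each dot product becomes a synchronization bottleneck, which is why reordering the iteration to expose this overlap (as in pipelined Krylov variants) matters.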