
Video: What is Driving Heterogeneity in HPC?

Wen-mei Hwu from the University of Illinois at Urbana-Champaign presented this talk at the Blue Waters Symposium. “In the 21st Century, we are able to understand, design, and create what we can compute. Computational models are allowing us to see even farther, go back and forth in time, learn better, test hypotheses that cannot be verified any other way, and create safe artificial processes.”

Video: Parallel I/O Best Practices

In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.

Video: Effect and Propagation of Silent Data Corruption in HPC Applications

“Modern HPC systems are complex due to the sheer number of components that comprise them. With this complexity comes the reality of failures. One particularly damaging and little understood type of failure is silent data corruption (SDC). SDC occurs when program state changes without intervention by the application or the system. An understanding of how applications handle state perturbations, and how these corrupted values propagate through HPC applications, is key to mitigating their effects. In this talk, we present our results from fault injection experiments on an Algebraic Multigrid linear solver.”
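To illustrate the kind of perturbation such fault-injection studies introduce, here is a minimal sketch (not the speakers' actual harness) of flipping a single bit in a 64-bit floating-point value — the canonical model of a silent data corruption event:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a 64-bit IEEE-754 double and return the result.

    bit 0 is the least-significant mantissa bit; bit 63 is the sign bit.
    """
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (corrupted,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return corrupted

# A flip in a low mantissa bit barely perturbs the value...
print(flip_bit(1.0, 0))    # ~1.0000000000000002
# ...while a flip in the exponent field changes it by orders of magnitude.
print(flip_bit(1.0, 52))   # 0.5
```

The two cases show why SDC is hard to detect: depending on which bit flips, the same event can be numerically invisible or catastrophic to a solver's convergence.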

Video: Algorithms for Extreme-Scale Systems

Bill Gropp from the University of Illinois at Urbana-Champaign presented this talk at the Blue Waters Symposium. “The large number of nodes and cores in extreme scale systems requires rethinking all aspects of algorithms, especially for load balancing and for latency hiding. In this project, I am looking at the use of nonblocking collective routines in Krylov methods, the use of speculation and large memory in graph algorithms, the use of locality-sensitive thread scheduling for better load balancing, and model-guided communication aggregation to reduce overall communication costs. This talk will discuss some current results and future plans, and possibilities for collaboration in evaluating some of these approaches.”
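The latency-hiding idea behind nonblocking collectives in Krylov methods can be sketched in a toy single-process form: start the global reduction (here simulated with a worker thread; a real implementation would use something like MPI_Iallreduce), do independent local work while it is in flight, and only then wait for the result. This is a hypothetical illustration, not code from the project:

```python
from concurrent.futures import ThreadPoolExecutor

def inner_product(x, y):
    """Local part of a global dot product (the reduction payload)."""
    return sum(a * b for a, b in zip(x, y))

def overlapped_step(x, y, local_work):
    """Overlap a 'nonblocking' reduction with independent local work."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(inner_product, x, y)  # post the reduction
        local = local_work()                       # overlap: local compute
        return future.result(), local              # wait for completion

# Example: the dot product completes while other work proceeds.
dot, extra = overlapped_step([1.0, 2.0], [3.0, 4.0], lambda: "local done")
```

In a pipelined Krylov solver, the "local work" would be a sparse matrix-vector product or preconditioner application, so the reduction's network latency is hidden behind useful computation.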

Addressing Climate Change Uncertainties with Petascale Computing

“This collaborative research between the University of Illinois, the National Center for Atmospheric Research, and the University of Maryland is aimed at using the Blue Waters petascale resources to address key uncertainties associated with the numerical modeling of the Earth’s climate system and the ability to accurately analyze past and projected future changes in climate.”

Video: Petascale Supercomputing for Space-Based Earth Science

“We have made substantial progress towards three transformative contributions: (1) we are the first team to formally link high-resolution astrodynamics design and coordination of space assets with their Earth science impacts within a Petascale ‘many-objective’ global optimization framework, (2) we have successfully completed the largest Monte Carlo simulation experiment for evaluating the required satellite frequencies and coverage to maintain acceptable global forecasts of terrestrial hydrology (especially in poorer countries), and (3) we have evaluated the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission. This work illustrates the tradeoffs and consequences of a collapse in the current portfolio of rainfall missions.”

Video: Introduction to XSEDE 2.0 and Beyond

This presentation will briefly review XSEDE, its past mission and accomplishments, and give insight into the direction and vision for the second round of XSEDE.

Ed Seidel Presents: Supercomputing in an Era of Big Data and Big Collaboration

“Supercomputing has reached a level of maturity and capability where many areas of science and engineering are not only advancing rapidly due to computing power, they cannot progress without it. I will illustrate examples from NCSA’s Blue Waters supercomputer, and from major data-intensive projects including the Large Synoptic Survey Telescope, and give thoughts on what will be needed going forward.”

GPU Accelerated Quantum Chemistry: A New Method

“The ability to accurately and efficiently study the absorption spectra of large chemical systems necessitates the development of new algorithms and the use of different architectures. We have developed a highly parallelizable algorithm in order to study excited state properties with ab initio electronic structure theory. This approach has recently been implemented to take advantage of graphical processing units to further improve efficiency.”

Video: Towards Inevitable Convergence of HPC and Big Data

Satoshi Matsuoka from the Tokyo Institute of Technology discusses Big Data at the NCSA Blue Waters Symposium. “The trend towards convergence is not only strategic, however, but rather inevitable: as Moore’s law ends, sustained growth in data capabilities, not compute, will advance overall capacity and thus accelerate research and ultimately the industry.”