In this podcast, the Radio Free HPC team discusses Henry Newman’s recent editorial calling for a self-descriptive data format that will stand the test of time. Henry contends that we are headed for massive data loss unless we act.
Larry Smarr presented this talk as part of NCSA’s 30th Anniversary Celebration. “For the last thirty years, NCSA has played a critical role in bringing computational science and scientific visualization to the national user community. I will embed those three decades in the 50-year period from 1975 to 2025, beginning with my solving Einstein’s equations for colliding black holes on the megaflops-class CDC 6600 and ending with the exascale supercomputer. These 50 years span a period in which we will have seen a trillion-fold increase in supercomputer speed.”
George Slota presented this talk at the Blue Waters Symposium. “In recent years, many graph processing frameworks have been introduced with the goal of simplifying the analysis of real-world graphs on commodity hardware. However, these popular frameworks lack scalability to modern massive-scale datasets. This work introduces a methodology for graph processing on distributed HPC systems that is simple to implement, generalizes to broad classes of graph algorithms, and scales to systems with hundreds of thousands of cores and graphs with billions of vertices and trillions of edges.”
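The bulk-synchronous, vertex-partitioned pattern that such distributed graph frameworks build on can be illustrated with a small single-process sketch. This is not Slota’s implementation; the partitioning rule (vertex ID modulo partition count) and all names here are illustrative assumptions:

```python
from collections import defaultdict

def partition_bfs(edges, num_parts, source):
    """Level-synchronous BFS over a vertex-partitioned undirected graph.

    Each 'partition' owns the vertices v with v % num_parts == p, relaxes
    only the edges leaving its own slice of the frontier, and exchanges
    newly discovered vertices with their owners at the end of every
    superstep -- the bulk-synchronous pattern distributed frameworks use.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        # Local compute: each partition scans only the frontier vertices
        # it owns, collecting discoveries addressed to their owners.
        outgoing = defaultdict(set)  # owner partition -> discovered vertices
        for p in range(num_parts):
            for u in (v for v in frontier if v % num_parts == p):
                for w in adj[u]:
                    if w not in dist:
                        outgoing[w % num_parts].add(w)
        # "Communication" step: owners merge the discoveries they received.
        level += 1
        frontier = set()
        for received in outgoing.values():
            for w in received:
                if w not in dist:
                    dist[w] = level
                    frontier.add(w)
    return dist
```

On a real machine each partition would be an MPI rank and the merge step an all-to-all exchange; here one process plays every partition in turn, so the superstep structure is visible without any communication library.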
IDC has announced the featured speakers for the next international HPC User Forum. The event will take place Sept. 22 in Beijing, China.
In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.
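One widely cited best practice in this area is to have each rank write a contiguous, block-aligned region of a shared file instead of many small interleaved records. The offset arithmetic can be sketched in a few lines of single-process Python; a real MPI code would issue the equivalent `MPI_File_write_at` calls concurrently from every rank, and the function and block size below are illustrative assumptions, not from the talk:

```python
def write_shared_file(path, rank_payloads, block_size):
    """Write each 'rank's' payload into its own aligned block of one file.

    Disjoint, block-aligned offsets are a common parallel-I/O guideline:
    they avoid overlapping writes and contention on file-system blocks.
    Here one process plays each rank in turn.
    """
    with open(path, "wb") as f:
        for rank, payload in enumerate(rank_payloads):
            if len(payload) > block_size:
                raise ValueError("payload larger than its block")
            f.seek(rank * block_size)
            # Pad to the block boundary so every region stays aligned.
            f.write(payload.ljust(block_size, b"\0"))
```

Because every rank’s region is disjoint by construction, no locking is needed even when the writes really do happen in parallel.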
Peter Ungaro presented this talk at the 2016 Blue Waters Symposium. “Built by Cray, Blue Waters is one of the most powerful supercomputers in the world, and is the fastest supercomputer on a university campus. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos.”
“I am honored to have been asked to drive NCSA’s continuing mission as a world-class, integrative center for transdisciplinary convergent research, education, and innovation,” said Gropp. “Embracing advanced computing and domain collaborations across the University of Illinois at Urbana-Champaign campus and ensuring scientific communities have access to advanced digital resources will be at the heart of these efforts.”
Ed Seidel from NCSA presented this talk at The Digital Future conference in Berlin. “The National Center for Supercomputing Applications (NCSA) is a hub of transdisciplinary research and digital scholarship where University of Illinois faculty, staff, and students, and collaborators from around the globe, unite to address research grand challenges for the benefit of science and society. NCSA is also an engine of economic impact for the state and the nation, helping companies address computing and data challenges and providing hands-on training for undergraduate and graduate students and post-docs.”
“This talk will describe one new effort to embed best practices for reproducible scientific computing into the traditional university curriculum. In particular, a set of open-source, liberally licensed IPython (now Jupyter) notebooks is being developed and tested to accompany the book “Effective Computation in Physics.” These interactive lecture materials lay out in-class exercises for a project-driven, upper-level undergraduate course and are accordingly intended to be forked, modified, and reused by professors across universities and disciplines.”
“My team at the University of Minnesota has been collaborating with the team of Falk Herwig at the University of Victoria to simulate brief events in the lives of stars that can greatly affect the heavy elements they synthesize in their interiors and subsequently expel into the interstellar medium. These events are caused by the ingestion of highly combustible hydrogen-rich fuel into the convection zone above a helium burning shell in the deeper interior. Although these events are brief, it can take millions of time steps to simulate the dynamics in sufficient detail to capture subtle aspects of the hydrogen ingestion. To address the computational challenge, we exploit modern multicore and many-core processors and also scale the simulations to run efficiently on over 13,000 nodes of NSF’s Blue Waters machine at NCSA.”
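Codes like this scale by splitting the simulation grid across nodes and exchanging thin “halo” layers of boundary cells every time step. A minimal one-dimensional sketch of that pattern, using explicit diffusion as a stand-in for the real hydrodynamics (the PDE, grid, and function names are illustrative assumptions, not the team’s code):

```python
def diffusion_step_decomposed(u, num_domains, alpha=0.1):
    """One explicit diffusion time step over a 1-D grid split into chunks.

    Each chunk is copied out with one ghost ('halo') cell on each side,
    updated independently -- as separate ranks or cores would do in
    parallel -- and the interiors are stitched back together. The result
    is identical to an undecomposed update of the whole grid.
    """
    n = len(u)
    bounds = [(d * n) // num_domains for d in range(num_domains + 1)]
    pieces = []
    for d in range(num_domains):
        lo, hi = bounds[d], bounds[d + 1]
        # Halo exchange: fetch one neighbor cell from each side,
        # clamped at the physical boundary of the grid.
        left = u[lo - 1] if lo > 0 else u[lo]
        right = u[hi] if hi < n else u[hi - 1]
        local = [left] + u[lo:hi] + [right]
        # Update only the interior of the padded local array.
        updated = [
            local[i] + alpha * (local[i - 1] - 2 * local[i] + local[i + 1])
            for i in range(1, len(local) - 1)
        ]
        pieces.append(updated)
    return [x for piece in pieces for x in piece]
```

Since each time step needs only one such exchange with immediate neighbors, the communication volume stays small relative to the local compute, which is what lets runs like these scale to thousands of nodes over millions of steps.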