Video: The Legion Programming Model

“Developed by Stanford University, Legion is a data-centric programming model for writing high-performance applications for distributed heterogeneous architectures. Legion provides a common framework for implementing applications which can achieve portable performance across a range of architectures. The target class of users dictates that productivity in Legion will always be a second-class design constraint behind performance. Instead Legion is designed to be extensible and to support higher-level productivity languages and libraries.”
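For readers unfamiliar with Legion, the sketch below shows roughly what a minimal Legion program looks like, modeled on the hello-world tutorial that ships with the Legion C++ runtime: a task is written as an ordinary C++ function, registered with the runtime, and designated as the top-level task. This is a hedged illustration only; header names and registration calls may differ between Legion releases.

```cpp
// Minimal Legion sketch, based on the Legion tutorial examples.
#include <cstdio>
#include "legion.h"

using namespace Legion;

enum TaskID {
  HELLO_WORLD_ID,  // identifier for the top-level task
};

// A Legion task is a plain function with this signature.
void hello_world_task(const Task *task,
                      const std::vector<PhysicalRegion> &regions,
                      Context ctx, Runtime *runtime)
{
  printf("Hello from a Legion task\n");
}

int main(int argc, char **argv)
{
  // Tell the runtime which task to launch first.
  Runtime::set_top_level_task_id(HELLO_WORLD_ID);

  // Register a CPU variant of the task before the runtime starts.
  {
    TaskVariantRegistrar registrar(HELLO_WORLD_ID, "hello_world");
    registrar.add_constraint(ProcessorConstraint(Processor::LOC_PROC));
    Runtime::preregister_task_variant<hello_world_task>(registrar, "hello_world");
  }

  // Start the runtime; it invokes the top-level task on some processor.
  return Runtime::start(argc, argv);
}
```

The data-centric part of the model enters when tasks declare the logical regions they read and write, which lets the runtime schedule and move data across a distributed, heterogeneous machine; that machinery is beyond this small sketch.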

Video: System Interconnects for HPC

In this video from the 2017 Argonne Training Program on Extreme-Scale Computing, Pavan Balaji from Argonne presents an overview of system interconnects for HPC. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two weeks of intensive training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

HPC Connects with Smart Cities at SC17

In this video from the SC17 HPC Connects series, Pete Beckman and Charlie Catlett from Argonne describe how the Smart Cities initiative aims to improve the quality of life for residents by using HPC, urban informatics, and other technologies to improve the efficiency of city services. “Smart Cities will be the topic of the SC17 plenary session, which kicks off the conference at 5:30pm on Monday, Nov. 13 in the Colorado Convention Center.”

Video: Evolution of MATLAB

Cleve Moler from MathWorks gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: Data analysis, exploration, and visualization.”

Video: Silicon Photonics for Extreme Computing

Keren Bergman from Columbia University gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Exaflop machines would represent a thousand-fold improvement over the current standard, the petaflop machines that first came on line in 2008. But while exaflop computers already appear on funders’ technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge.”

HPC I/O for Computational Scientists

Phil Carns from Argonne gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Darshan is a scalable HPC I/O characterization tool. It captures an accurate but concise picture of application I/O behavior with minimum overhead. Darshan was originally developed on the IBM Blue Gene series of computers deployed at the Argonne Leadership Computing Facility, but it is portable across a wide variety of platforms including the Cray XE6, Cray XC30, and Linux clusters. Darshan routinely instruments jobs using up to 786,432 compute cores on the Mira system at ALCF.”
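As a point of reference (not from the talk), the sketch below shows the kind of MPI-IO activity Darshan records transparently when an application is linked against, or preloaded with, the Darshan runtime library; the file name and transfer size here are purely illustrative.

```cpp
// Illustrative MPI-IO write: each rank writes its own block to a shared file.
// On a Darshan-instrumented system, calls like these are counted and timed
// automatically, with no changes to the application source.
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Each rank prepares a contiguous buffer of doubles.
  const int count = 1024;
  std::vector<double> buf(count, static_cast<double>(rank));

  MPI_File fh;
  MPI_File_open(MPI_COMM_WORLD, "output.dat",
                MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

  // Collective write at a rank-specific offset.
  MPI_Offset offset = static_cast<MPI_Offset>(rank) * count * sizeof(double);
  MPI_File_write_at_all(fh, offset, buf.data(), count, MPI_DOUBLE,
                        MPI_STATUS_IGNORE);

  MPI_File_close(&fh);
  MPI_Finalize();
  return 0;
}
```

After such a run, the Darshan log can be summarized with tools such as darshan-parser or darshan-job-summary.pl to see per-file counters, access sizes, and time spent in I/O.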

Video: NASA Advanced Computing Environment for Science & Engineering

Rupak Biswas from NASA gave this talk at the Argonne Training Program on Extreme-Scale Computing. “High performance computing is now integral to NASA’s portfolio of missions to pioneer the future of space exploration, accelerate scientific discovery, and enable aeronautics research. Anchored by the Pleiades supercomputer at NASA Ames Research Center, the High End Computing Capability (HECC) Project provides a fully integrated environment to satisfy NASA’s diverse modeling, simulation, and analysis needs.”

Video: Revolution in Computer and Data-enabled Science and Engineering

Ed Seidel from the University of Illinois gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. The theme of his talk centers around the need for interdisciplinary research. “Interdisciplinary research (IDR) is a mode of research by teams or individuals that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialized knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline or area of research practice.”

Future HPC Leaders Gather at Argonne Training Program on Extreme-Scale Computing

Over at ALCF, Andrea Manning writes that the recent Argonne Training Program on Extreme-Scale Computing brought together HPC practitioners from around the world. “You can’t get this material out of a textbook,” said Eric Nielsen, a research scientist at NASA’s Langley Research Center. Added Johann Dahm of IBM Research, “I haven’t had this material presented to me in this sort of way ever.”

Video: Argonne’s Theta Supercomputer Architecture

Scott Parker gave this talk at the Argonne Training Program on Extreme-Scale Computing. “Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.”