Video: Data-Centric Parallel Programming

In this slidecast, Torsten Hoefler from ETH Zurich presents: Data-Centric Parallel Programming. “To maintain performance portability in the future, it is imperative to decouple architecture-specific programming paradigms from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that enables separating code definition from its optimization.”
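To give a rough flavor of the idea, here is a minimal sketch using the open-source DaCe framework, which implements SDFGs: the computation is written once as numpy-style Python, and the framework captures it as a dataflow graph that can be optimized and mapped to different architectures separately. The saxpy example and its parameters are illustrative choices of ours, not material from the talk; consult the DaCe documentation for the authoritative interface.

```python
import numpy as np
import dace

# Symbolic size: the program is written once, independent of the problem size.
N = dace.symbol('N')

# The computation is expressed as plain numpy-style Python; the framework
# captures it as a Stateful DataFlow multiGraph (SDFG), on which optimizations
# and hardware mappings are applied without touching this definition.
@dace.program
def saxpy(a: dace.float64, x: dace.float64[N], y: dace.float64[N]):
    y[:] = a * x + y

if __name__ == '__main__':
    x = np.random.rand(1024)
    y = np.random.rand(1024)
    sdfg = saxpy.to_sdfg()   # the data-centric IR, which tools can transform
    saxpy(2.0, x, y)         # JIT-compiles the (possibly optimized) SDFG and runs it
```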

Call for Papers: EuroMPI Conference in Zurich

The EuroMPI conference has issued its Call for Papers. The event takes place September 10-13 in Zurich, Switzerland. “The EuroMPI conference has since 1994 been the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and High Performance Computing with particular focus on quality, portability, performance and scalability.”
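For readers new to the topic, the toy snippet below shows the basic message-passing pattern the conference revolves around, using mpi4py (Python bindings to MPI). It is only a minimal illustration, not material from the call itself.

```python
# Minimal point-to-point message passing with mpi4py.
# Run with: mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({'payload': 42}, dest=1, tag=0)   # pickle-based send to rank 1
    print('rank 0 sent a message')
elif rank == 1:
    data = comm.recv(source=0, tag=0)           # matching receive
    print(f'rank 1 received {data}')
```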

Supercomputing How Fish Save Energy Swimming in Schools

Over at CSCS, Simone Ulmer writes that researchers at ETH Zurich have clarified the previously unresolved question of whether fish save energy by swimming together in schools. They achieved this by simulating the complex physics on the supercomputer ‘Piz Daint’ and combining detailed flow simulations with a reinforcement learning algorithm for the first time. “In their simulations, they have not examined every aspect involved in the efficient swimming behavior of fish. However, it is clear that the developed algorithms and physics learned can be transferred into autonomously swimming or flying robots.”
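The sketch below is not the researchers' method; it is only a minimal tabular Q-learning loop, with a made-up reward standing in for the energy a follower fish saves in a leader's wake, to illustrate what a reinforcement learning algorithm looks like in general.

```python
import numpy as np

# Toy tabular Q-learning loop (illustrative only): a hypothetical "follower
# fish" learns which of a few discrete relative positions behind a leader
# minimizes its swimming cost, modeled by a made-up reward function.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3            # relative positions / moves (hypothetical)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    # Hypothetical dynamics: the action shifts the relative position; the
    # reward peaks at the middle state, standing in for "energy saved".
    next_state = int(np.clip(state + action - 1, 0, n_states - 1))
    reward = -abs(next_state - n_states // 2)
    return next_state, reward

state = 0
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print('learned policy (best move per relative position):', Q.argmax(axis=1))
```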

Video: Weather and Climate Modeling at Convection-Resolving Resolution

David Leutwyler from ETH Zurich gave this talk at the 2017 Chaos Communication Congress. “The representation of thunderstorms (deep convection) and rain showers in climate models represents a major challenge, as this process is usually approximated with semi-empirical parameterizations due to the lack of appropriate computational resolution. Climate simulations using kilometer-scale horizontal resolution allow explicitly resolving deep convection and thus allow for an improved representation of the water cycle. We present a set of such simulations covering European and global computational domains.”
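To get a feel for why kilometer-scale resolution is computationally demanding, here is a back-of-envelope estimate of the number of horizontal grid columns for an assumed continental-scale domain and for the globe at a few grid spacings. The domain size and spacings are illustrative assumptions of ours, not figures from the talk.

```python
import math

# Back-of-envelope grid-size estimate (rough, assumed numbers):
# how many horizontal grid columns a domain needs at a given grid spacing.
EARTH_SURFACE_KM2 = 4 * math.pi * 6371.0**2   # ~5.1e8 km^2
EUROPE_DOMAIN_KM2 = 3000.0 * 3000.0           # assumed continental-scale domain

def columns(area_km2, dx_km):
    """Number of horizontal grid columns at spacing dx_km."""
    return area_km2 / dx_km**2

for dx in (12.0, 2.0, 1.0):                   # illustrative grid spacings in km
    print(f'dx = {dx:4.1f} km: Europe ~{columns(EUROPE_DOMAIN_KM2, dx):.1e} columns, '
          f'global ~{columns(EARTH_SURFACE_KM2, dx):.1e} columns')
```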

Interview: Dr. Christoph Schär on Escaping the Data Avalanche for Climate Modeling

“There are large efforts towards refining the horizontal resolution of climate models to O(1 km) with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement would move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. However, the output volume of climate simulations would dramatically grow, and storing it for later analysis would likely become impractical, due to limited I/O bandwidth and mass-storage capacity. In this presentation we discuss possible solutions to this challenge.”
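A rough, illustrative calculation shows why the output volume becomes a problem: with assumed (not quoted) values for grid size, vertical levels, output fields, and output frequency, hourly 3-D output from a ~1 km global model already reaches tens of terabytes per simulated day.

```python
# Rough, illustrative estimate of the output volume of a km-scale global model.
# Every number below is an assumption chosen for the arithmetic, not a figure
# from the interview.
columns = 5.1e8                  # ~1 km global grid: roughly one column per km^2
levels = 100                     # assumed vertical levels
fields = 10                      # assumed 3-D output fields
bytes_per_value = 4              # single precision
outputs_per_day = 24             # hourly output

bytes_per_day = columns * levels * fields * bytes_per_value * outputs_per_day
print(f'~{bytes_per_day / 1e12:.0f} TB of output per simulated day')
print(f'~{bytes_per_day * 365 * 100 / 1e18:.1f} EB for a 100-year simulation')
```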

Panel Discussion: The Exascale Era

In this video from the Switzerland HPC Conference, Rich Brueckner from insideHPC moderates a panel discussion on Exascale Computing. “The Exascale Computing Project in the USA is tasked with developing a set of advanced supercomputers with 50x better performance than today’s fastest machines on real applications. This panel discussion will look at the challenges, gaps, and probable pathways forward in this monumental endeavor.”

Panelists:

Gilad Shainer, HPC Advisory Council
Jeffrey Stuecheli, IBM
DK Panda, Ohio State University
Torsten Hoefler, ETH Zurich
Rich Graham, Mellanox

Video: A Hybrid Approach to Strongly Correlated Materials

Matthias Troyer from ETH Zurich presented this talk at a recent Microsoft Research event. “Given limitations to the scaling for simulating the full Coulomb Hamiltonian on quantum computers, a hybrid approach – deriving effective models from density functional theory codes and solving these effective models on quantum computers – seems to be a promising way to proceed for calculating the electronic structure of correlated materials on a quantum computer.”
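To make the “effective model” idea concrete, the toy example below builds and exactly diagonalizes a two-site Hubbard Hamiltonian, a standard stand-in for a correlated-materials model; in the hybrid scheme such a model would be derived from a density functional theory calculation and handed to a quantum solver. The model, basis, and parameter values here are textbook illustrations, not taken from the talk.

```python
import numpy as np

# Toy effective model: two-site Hubbard Hamiltonian with two electrons in the
# total S_z = 0 sector, in the basis {|up,dn>, |dn,up>, |updn,0>, |0,updn>}
# (sign conventions absorbed into the basis). Here we diagonalize it exactly
# on a classical computer; in the hybrid approach a quantum solver would take
# over this step for larger, harder models.
t, U = 1.0, 4.0   # hopping and on-site repulsion (illustrative values)

H = np.array([[0.0, 0.0, -t,  -t ],
              [0.0, 0.0, -t,  -t ],
              [-t,  -t,   U,  0.0],
              [-t,  -t,  0.0,  U ]])

energies = np.linalg.eigvalsh(H)
print('spectrum:', np.round(energies, 3))
print('ground-state energy (closed form):', (U - np.sqrt(U**2 + 16 * t**2)) / 2)
```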

Reflecting on the Goal and Baseline for Exascale Computing

Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on choice of architecture and implementation. This raises the question of what our baseline is, over which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”
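As a small, self-contained illustration of how much implementation choice alone can matter, the sketch below times the same three-point stencil written as an interpreted loop and as a vectorized expression. The exact ratio is machine-dependent, but it is typically well over an order of magnitude; the example is ours, not one from the talk.

```python
import time
import numpy as np

# Same 1-D three-point stencil, two implementations: an interpreted
# element-wise loop versus a vectorized numpy expression.
n = 1_000_000
a = np.random.rand(n)
out = np.empty(n - 2)

start = time.perf_counter()
for i in range(1, n - 1):                    # interpreted element-wise loop
    out[i - 1] = a[i - 1] - 2.0 * a[i] + a[i + 1]
naive = time.perf_counter() - start

start = time.perf_counter()
out_vec = a[:-2] - 2.0 * a[1:-1] + a[2:]     # vectorized, memory-bandwidth bound
vec = time.perf_counter() - start

assert np.allclose(out, out_vec)
print(f'naive: {naive:.3f} s, vectorized: {vec:.4f} s, speedup ~{naive / vec:.0f}x')
```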

Thomas Schulthess to Present Goals and Baselines for Exascale at SC16

Next month at SC16, Dr. Thomas Schulthess from CSCS in Switzerland will present a talk entitled “Reflecting on the Goal and Baseline for Exascale Computing.” The presentation will take place on Wednesday, Nov. 15 at 11:15 am in Salt Palace Ballroom-EFGHIJ.

Creating Balance in HPC on the Piz Daint Supercomputer

The flagship supercomputer at the Swiss National Supercomputing Centre (CSCS), Piz Daint, named after a mountain in the Alps, currently delivers 7.8 petaflops of compute performance, or 7.8 quadrillion mathematical calculations per second. A recently announced upgrade will double its peak performance, thanks to a refresh using the latest Intel Xeon CPUs and 4,500 Nvidia Tesla P100 GPUs.
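A quick back-of-envelope reading of those figures, using only the numbers quoted above:

```python
# Back-of-envelope reading of the performance figures quoted in the article.
current_pf = 7.8                       # petaflops today, as stated
upgraded_pf = 2 * current_pf           # "double its peak performance"
print(f'current:  {current_pf * 1e15:.1e} calculations per second')
print(f'upgraded: ~{upgraded_pf:.1f} petaflops, '
      f'i.e. about {upgraded_pf * 1e15:.1e} calculations per second')
```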