Leadership Computing for Europe and the Path to Exascale Computing

Thomas Schulthess from CSCS gave this talk at the GPU Technology Conference. “With over 5000 GPU-accelerated nodes, Piz Daint has been Europe’s leading supercomputing system since 2013, and is currently one of the most performant and energy-efficient supercomputers on the planet. It has been designed to optimize throughput of multiple applications, covering all aspects of the workflow, including data analysis and visualisation. We will discuss ongoing efforts to further integrate these extreme-scale compute and data services with infrastructure services of the cloud. As a Tier-0 system of PRACE, Piz Daint is accessible to all scientists in Europe and worldwide. It provides a baseline for future development of exascale computing.”

ReFrame: A Regression Testing Framework Enabling Continuous Integration of Large HPC Systems

“ReFrame is a new framework for writing regression tests for HPC systems. The goal of the framework is to abstract away the complexity of the interactions with the system, separating the logic of a regression test from the low-level details, which pertain to the system configuration and setup. This allows users to write easily portable regression tests, focusing only on the functionality. The tutorial will include a live demo of ReFrame and a hands-on session demonstrating how to configure and use it.”
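The pattern ReFrame is built around, keeping portable test logic separate from system-specific launch details, can be sketched in plain Python. Note that this is a conceptual illustration only: the names `SystemConfig`, `RegressionCheck`, and `run_check` are invented for this sketch and are not part of ReFrame's actual API.

```python
# Conceptual sketch of ReFrame's core idea: a regression check describes
# only *what* to run and *what* output to expect; system-specific details
# (e.g. the job launcher) live in a separate configuration object.
# All names here are hypothetical, not ReFrame's real API.
from dataclasses import dataclass
import subprocess

@dataclass
class SystemConfig:
    name: str
    launcher: list  # e.g. ['srun'] on a Slurm system, [] on a workstation

@dataclass
class RegressionCheck:
    command: list          # portable test logic: the command to run
    expected_output: str   # ...and the string its output must contain

def run_check(check: RegressionCheck, system: SystemConfig) -> bool:
    """Combine the portable check with system-specific launch details."""
    result = subprocess.run(system.launcher + check.command,
                            capture_output=True, text=True)
    return check.expected_output in result.stdout

# The same check runs unmodified on any configured system.
workstation = SystemConfig(name='workstation', launcher=[])
hello = RegressionCheck(command=['echo', 'Hello, HPC'],
                        expected_output='Hello')
print(run_check(hello, workstation))  # True
```

Porting the check to a new machine then means adding one `SystemConfig`, not touching the test itself, which is the portability argument the blurb makes.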

Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software

Gilles Fourestey from EPFL gave this talk at the Swiss HPC Conference. “LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature.”

Video: Piz Daint Supercomputer speeds PRACE simulations in Europe

In this video, the European PRACE HPC initiative describes how the Piz Daint supercomputer at CSCS in Switzerland provides world-class supercomputing power for research. “We are very pleased that Switzerland – one of our long-time partners in high-performance computing – is joining the European effort to develop supercomputers in Europe,” said Mariya Gabriel, Commissioner for Digital Economy and Society. “This will enhance Europe’s leadership in science and innovation, help grow the economy and build our industrial competitiveness.”

Preliminary Agenda Posted for HPC Advisory Council Swiss Conference

The HPC Advisory Council has posted the agenda for its Swiss Conference. Held in conjunction with HPCXXL, the event takes place April 9-12 in Lugano, Switzerland. “Delve into a wide range of interests, disciplines and topics in HPC – from present-day application to its future potential. Join the Centro Svizzero di Calcolo Scientifico (CSCS), HPC Advisory Council members and colleagues from around the world for invited and contributed talks and immersive tutorials at the ninth annual Swiss Conference! With knowledgeable evaluations, prescriptive best practices and provocative insights, the open-forum conference brings together industry experts for three days of highly interactive sessions.”

Deep Learning and Automatic Differentiation from Theano to PyTorch

Inquisitive minds want to know what causes the universe to expand, how M-theory binds the smallest of the small particles, or how social dynamics can lead to revolutions. “The way that statisticians answer these questions is with Approximate Bayesian Computation (ABC), which we learn on the first day of the summer school and which we combine with High Performance Computing. The second day focuses on a popular machine learning approach, deep learning, which mimics the deep neural network structure in our brain in order to predict complex phenomena of nature.”

SC17 Panel: Energy Efficiency Gains From Software

In this video from SC17 in Denver, Dan Reed moderates a panel discussion on HPC Software for Energy Efficiency. “This panel will explore which HPC software capabilities have been most helpful over the past years in improving HPC system energy efficiency. It will then look forward, asking in which layers of the software stack a priority should be put on introducing energy awareness – e.g., runtime, scheduling, applications. What is needed moving forward? Who is responsible for that forward momentum?”

Searching for Human Brain Memory Molecules with the Piz Daint Supercomputer

Scientists at the University of Basel are using the Piz Daint supercomputer at CSCS to discover interrelationships in the human genome that might simplify the search for “memory molecules” and eventually lead to more effective medical treatment for people with diseases that are accompanied by memory disturbance. “Until now, searching for genes related to memory capacity has been comparable to seeking out the proverbial needle in a haystack.”

PRACE Awards 1.7 Thousand Million Core Hours for Research Projects in Europe

Today the European PRACE initiative announced that the 46 awards from its recent 15th Call for Proposals total nearly 1.7 thousand million (1.7 billion) core hours. The 46 awarded projects are led by principal investigators from 12 different European countries. “Of local interest this time around, the awarded projects involve co-investigators from the USA (7) and Russia (2). All information and the abstracts of the projects awarded under the 15th PRACE Call for Proposals are now available online.”

SC17 Panel Preview: How Serious Are We About the Convergence Between HPC and Big Data?

SC17 will feature a panel discussion entitled How Serious Are We About the Convergence Between HPC and Big Data? “The possible convergence between the third and fourth paradigms confronts the scientific community with both a daunting challenge and a unique opportunity. The challenge resides in the requirement to support both heterogeneous workloads with the same hardware architecture. The opportunity lies in creating a common software stack to accommodate the requirements of scientific simulations and big data applications productively while maximizing performance and throughput.”