Agenda Posted: OpenPOWER 2018 Summit in Las Vegas

The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. “The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparatuses ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has produced to date.”
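To make the quoted multiplier concrete, here is a one-line back-of-envelope calculation. The 30x factor comes from the abstract above; the baseline volume is a hypothetical placeholder for illustration, not a figure from the article.

```python
# Back-of-envelope scale of the HL-LHC data challenge. The 30x multiplier
# is quoted from the talk abstract; the baseline is a hypothetical assumption.
LHC_DATA_TO_DATE_PB = 300.0  # assumed baseline in petabytes (hypothetical)
HL_LHC_MULTIPLIER = 30       # "some 30 times more data" (from the abstract)

total_pb = LHC_DATA_TO_DATE_PB * HL_LHC_MULTIPLIER
print(f"Projected HL-LHC total: {total_pb:,.0f} PB (~{total_pb / 1000:.1f} EB)")
```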

Radio Free HPC Looks at Diverging Chip Architectures in the Wake of Spectre and Meltdown

In this podcast, the Radio Free HPC team looks at the tradeoff between chip performance and security. In the aftermath of the recently disclosed Spectre and Meltdown exploits, cryptography guru Paul Kocher from Rambus is calling for a divergence in processor architectures.

HACC: Fitting the Universe inside a Supercomputer

Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. “In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one.”
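For readers curious what an N-body update of this kind looks like, here is a deliberately tiny sketch. HACC’s production solver pairs a long-range spectral particle-mesh method with architecture-tuned short-range force kernels; the O(N²) direct summation below is not HACC’s algorithm, just a minimal illustration of the leapfrog (kick-drift-kick) gravity update such codes perform at vastly larger scale. The constants G, SOFTENING, and the time step are arbitrary choices for the sketch.

```python
# Toy gravitational N-body step via direct summation -- illustrative only,
# not HACC's method (HACC uses particle-mesh plus short-range solvers).
import numpy as np

G = 1.0           # gravitational constant in code units (assumed)
SOFTENING = 1e-2  # Plummer softening to avoid singular pairwise forces (assumed)

def accelerations(pos, mass):
    """Pairwise gravitational accelerations, O(N^2) direct sum."""
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]       # (N, N, 3)
    dist2 = (diff ** 2).sum(axis=-1) + SOFTENING ** 2          # (N, N)
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                              # no self-force
    weighted = diff * inv_d3[..., np.newaxis] * mass[np.newaxis, :, np.newaxis]
    return G * weighted.sum(axis=1)                            # (N, 3)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick update, the core loop of an N-body evolver."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    pos = pos + dt * vel                             # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    return pos, vel

# Example: evolve 1,000 equal-mass particles for ten small steps.
rng = np.random.default_rng(0)
pos = rng.standard_normal((1000, 3))
vel = np.zeros((1000, 3))
mass = np.full(1000, 1.0 / 1000)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
```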

Job of the Week: HPC Systems Engineer at Washington State University

The Center for Institutional Research Computing at Washington State University is seeking a High-Performance Computing Systems Engineer in our Job of the Week. “This position will play a vital role in the engineering and administration of HPC clusters used by the research community at Washington State University. This position is an exciting opportunity to participate in the frontiers of research computing through the selection, configuration, and management of HPC infrastructure including all computing systems, networking, and storage. This position is key to ensuring the high quality of service and performance of WSU’s research computing resources.”

Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind

In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company’s machine learning technology and some of the challenges ahead. “DeepMind Inc. is well known for state-of-the-art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware as opposed to simple supervised learning. Finally I’d like to discuss opportunities for DRL to help systems design and operation.”
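As a pointer for readers unfamiliar with DQN, the sketch below shows the Bellman backup at the heart of the algorithm in its simplest tabular form, with a toy replay buffer. This is a minimal stand-in under stated assumptions, not DeepMind’s implementation: real DQN replaces the table with a deep network trained on minibatches, and the learning rate, discount factor, and batch size here are arbitrary.

```python
# Minimal tabular sketch of the DQN-style update: sample past transitions
# from a replay buffer and apply the Bellman backup. Real DQN uses a deep
# network in place of the table; constants here are illustrative assumptions.
import random
from collections import defaultdict

ALPHA = 0.1   # learning rate (assumed)
GAMMA = 0.99  # discount factor (assumed)

Q = defaultdict(float)  # Q[(state, action)] -> value estimate
replay_buffer = []      # transitions: (s, a, r, s_next, done)

def store(transition):
    """Record one (state, action, reward, next_state, done) transition."""
    replay_buffer.append(transition)

def train_step(actions, batch_size=32):
    """Sample stored experience and nudge Q toward the bootstrapped target."""
    batch = random.sample(replay_buffer, min(batch_size, len(replay_buffer)))
    for s, a, r, s_next, done in batch:
        # Target: immediate reward plus discounted best next-state value.
        target = r if done else r + GAMMA * max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

# Example: store one synthetic transition, then run a training step.
store(("s0", "left", 1.0, "s1", False))
train_step(actions=["left", "right"])
```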

Interview: European cHiPSet Event focuses on High-Performance Modeling and Simulation for Big Data Applications

The cHiPSet Annual Plenary Meeting takes place in France next month. To learn more, we caught up with the Vice-Chair for the project, Dr. Horacio González-Vélez, Associate Professor and Head of the Cloud Competency Centre at the National College of Ireland. “The plenary meeting will feature a workshop entitled ‘Accelerating Modeling and Simulation in the Data Deluge Era’. We are expecting keynote presentations and panel discussions on how the forthcoming exascale systems will influence the analysis and interpretation of data, including the simulation of models, to match observation to theory.”

TACC Podcast Looks at AI and Water Management

In this TACC podcast, Suzanne Pierce from the Texas Advanced Computing Center describes her upcoming panel discussion on AI and water management and the work TACC is doing to support efforts to bridge advanced computing with Earth science. “It’s about letting the AI help us be better decision makers. And it helps us move towards answering, discussing, and exploring the questions that are most important and most critical for our quality of life and our communities so that we can develop a future together that’s brighter.”

Video: Intel and NVIDIA at Congressional Hearing on Artificial Intelligence

In this video, Information Technology Subcommittee Chairman Will Hurd begins a three-part hearing on Artificial Intelligence. “Over the next three months, the IT Subcommittee will hear from industry professionals such as Intel and NVIDIA as well as government stakeholders with the goal of working together to keep the United States the world leader in artificial intelligence technology.”

Updating the SC18 Technical Program to Inspire the Future

In this special guest feature, SC18 Technical Program Chair David Keyes from KAUST writes that important changes are coming to the world’s biggest HPC conference this November in Dallas.