Earth-modeling System steps up to Exascale

“Unveiled today by the DOE, E3SM is a state-of-the-science modeling project that uses the world’s fastest computers to more accurately understand how Earth’s climate works and how it can evolve into the future. The goal: to support DOE’s mission to plan for robust, efficient, and cost-effective energy infrastructures now and into the distant future.”

Quantum Computing at NIST

Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. “Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers.”

Radio Free HPC Looks at the New CORAL-2 RFP for Exascale Computers

In this podcast, the Radio Free HPC team looks at the Department of Energy’s new CORAL-2 RFP for exascale computers. “As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine’s memory-centric architecture. They have a wager, so listen in to find out where the smart money is.”

Containers Using Singularity on HPC

Abhinav Thota from Indiana University gave this talk at the 2018 Swiss HPC Conference. “Container use is becoming more widespread in the HPC field. There are various reasons for this, including the broadening of the user base and applications of HPC. One of the popular container tools on HPC is Singularity, an open source project coming out of Berkeley Lab. In this talk, we will introduce Singularity, discuss how users at Indiana University are using it, and share our experience supporting it. This talk will include a brief demonstration as well.”
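
For readers who have not used the tool, here is a minimal sketch of how a containerized step might look on a cluster login or compute node; the image reference and command are illustrative placeholders, not material from the talk.

    import subprocess

    # Illustrative Docker Hub image; Singularity can pull and run Docker images directly.
    image = "docker://python:3.6"

    # "singularity exec" runs a single command inside the container. The user's home
    # directory is typically visible inside the container, so existing job scripts
    # and data usually need little or no modification.
    subprocess.run(["singularity", "exec", image, "python", "--version"], check=True)

On a real system a line like this would normally sit inside a batch script handed to the scheduler, with the image pulled to a local file ahead of time to avoid repeated downloads.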

Call for Submissions: SC18 Workshop on Reproducibility

Over at the SC18 Blog, Stephen Lien Harrell from Purdue writes that the conference will host a workshop on the hot topic of reproducibility. Their Call for Submissions is out with a deadline of August 19, 2018. “The Systems Professionals Workshop is a platform for discussing the unique challenges and developing the state of the practice for the HPC systems community. The program committee is soliciting submissions that address the best practices of building and operating high performance systems, with an emphasis on reproducible solutions that can be implemented by systems staff at other institutions.”

Job of the Week: HPC System Administrator at D.E. Shaw Research

D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. “Our research effort is aimed at achieving major scientific advances in the field of biochemistry and fundamentally transforming the process of drug discovery.”

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”

Fujitsu Upgrades RAIDEN at RIKEN Center for Advanced Intelligence Project

Fujitsu reports that the company has significantly boosted the performance of the RAIDEN supercomputer. RAIDEN is a computer system for artificial intelligence research originally deployed in 2017 at the RIKEN Center for Advanced Intelligence Project (AIP Center). “The upgraded RAIDEN has increased its performance by a considerable margin, moving from an initial total theoretical computational performance of 4 AI Petaflops to 54 AI Petaflops, placing it in the top tier of Japan’s systems. In building this system, Fujitsu demonstrates its commitment to supporting cutting-edge AI research in Japan.”

Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud

In this video from the GPU Technology Conference, John Stone from the University of Illinois describes how container technology in the NVIDIA GPU Cloud helps the University distribute accelerated applications for science and engineering. “Containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance or a workstation or a compute node. And it doesn’t require the typical amount of system administration skills and involvement to put one of these containers on a machine.”
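
As a rough sketch of that workflow, assuming Singularity is the container runtime on the cluster (the NGC registry path below is a placeholder, not an image named in the video), a GPU-enabled container can be run with the host’s NVIDIA driver mapped in via the --nv flag:

    import subprocess

    # Placeholder NGC image reference; real registry paths and tags depend on the application.
    image = "docker://nvcr.io/hpc/example-app:latest"

    # The --nv option binds the host's NVIDIA driver libraries and GPU devices into the
    # container, so the packaged CUDA application can see the node's GPUs without extra
    # system administration on the compute node itself.
    subprocess.run(["singularity", "exec", "--nv", image, "nvidia-smi"], check=True)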

Video: HPC Use for Earthquake Research

Christine Goulet from the Southern California Earthquake Center gave this talk at the HPC User Forum in Tucson. “SCEC coordinates fundamental research on earthquake processes using Southern California as its principal natural laboratory. The SCEC community advances earthquake system science by synthesizing knowledge of earthquake phenomena through physics-based modeling, including system-level hazard modeling, and by communicating our understanding of seismic hazards to reduce earthquake risk and promote community resilience.”