Video: HPC for Instabilities in Aerospace Propulsion Systems

Thierry Poinsot from Toulouse Fluid Mechanics Institute gave this talk at PASC19. “This talk focuses on aerospace propulsion where optimization often leads to the occurrence of instabilities where combustion couples with acoustics, leading to unacceptable oscillations (the most famous example is the Apollo engine which required 1330 full scale tests to reach acceptable oscillation levels). The talk will show how simulation is used to control these problems, in real gas turbine engines and in rocket engines.”

European Leadership in HPC, Big Data, and AI in Weather and Climate Prediction

Dr. Peter Bauer from ECMWF gave this talk at HiPEAC's Computing Systems Week in Edinburgh. “Meeting the future requirements for forecast reliability and timeliness needs 100-1000 times bigger high-performance computing and data management resources than today – towards what’s generally called ‘exascale’. To meet these needs, the weather and climate prediction community is undergoing one of its biggest revolutions since its foundation in the early 20th century.”

Altair HyperWorks 2019 Product Development Platform Speeds Time-to-Market

Today Altair released HyperWorks 2019, the latest version of its simulation- and AI-driven product development platform. “Our development focus for HyperWorks 2019 was to increase solve speed and functionality across our solutions for every stage of product development with optimization and multi-physics workflows for all manufacturing methods.”

ORNL to lead INFUSE Network for Fusion Energy Program

The Department of Energy has established the Innovation Network for Fusion Energy program, or INFUSE, to encourage private-public research partnerships for overcoming challenges in fusion energy development. “Researchers and scientists in the Department of Energy are developing new tools to predict the performance, reliability and economics of fusion reactor concepts.”

Video: Supercomputing Dynamic Earthquake Ruptures

Researchers are using XSEDE supercomputers to model multi-fault earthquakes in the Brawley fault zone, which links the San Andreas and Imperial faults in Southern California. Their work could predict the behavior of earthquakes that could potentially affect millions of people’s lives and property. “Basically, we generate a virtual world where we create different types of earthquakes. That helps us understand how earthquakes in the real world are happening.”

Advancing Fusion Science with CGYRO using GPU-based Leadership Systems

Jeff Candy and Igor Sfiligoi from General Atomics gave this talk at the GPU Technology Conference. “Gyrokinetic simulations are one of the most useful tools for understanding fusion science. We’ll explain how we designed and implemented CGYRO to make good use of the tens of thousands of GPUs on such systems, which provide simulations that bring us closer to fusion as an abundant clean energy source. We’ll also share benchmarking results of both CPU- and GPU-based systems.”

Video: Multi-GPU FFT Performance on Different Hardware Configurations

Kevin Roe from the Maui High Performance Computing Center gave this talk at the GPU Technology Conference. “We will characterize the performance of multi-GPU systems in an effort to determine their viability for running physics-based applications using Fast Fourier Transforms (FFTs). Additionally, we’ll discuss how multi-GPU FFTs allow available memory to exceed the limits of a single GPU and how they can reduce computational time for larger problem sizes.”
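The standard way a multi-GPU FFT splits a transform that exceeds one device's memory is a slab decomposition: each device transforms its slab of rows along one axis, an all-to-all transpose redistributes the data, and each device then transforms the other axis. A minimal NumPy sketch of that pattern (simulating the devices in-process; the function name and device count are illustrative, not from the talk):

```python
import numpy as np

def distributed_fft2(x: np.ndarray, n_devices: int) -> np.ndarray:
    """Slab-decomposed 2D FFT over a square array.

    Each simulated 'device' holds a contiguous slab of rows,
    transforms along axis 1, then a global transpose (the all-to-all
    exchange on a real cluster) lets each device transform the
    remaining axis.
    """
    n = x.shape[0]
    assert n % n_devices == 0, "rows must divide evenly across devices"
    slab = n // n_devices

    # Stage 1: each device FFTs its slab of rows along axis 1.
    stage1 = np.concatenate(
        [np.fft.fft(x[i * slab:(i + 1) * slab], axis=1)
         for i in range(n_devices)]
    )

    # Global transpose: on hardware this is the costly all-to-all step.
    stage1_t = stage1.T

    # Stage 2: FFT the other (now row) axis, again slab by slab.
    stage2 = np.concatenate(
        [np.fft.fft(stage1_t[i * slab:(i + 1) * slab], axis=1)
         for i in range(n_devices)]
    )
    return stage2.T

# Matches a single-device 2D FFT.
a = np.arange(16.0).reshape(4, 4)
assert np.allclose(distributed_fft2(a, 2), np.fft.fft2(a))
```

The transpose stage is why multi-GPU FFT performance depends so heavily on the interconnect: the compute per device shrinks as devices are added, but the all-to-all traffic does not.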

Epic HPC Road Trip Continues to NREL

In this special guest feature, Dan Olds from OrionX continues his Epic HPC Road Trip series with a stop at NREL in Golden, Colorado. “When it comes to energy efficient computing, NREL has to be one of the most advanced facilities in the world. It’s the first data center I’ve seen where the current PUE is shown on an LCD panel outside the door. When I was visiting, the PUE of the Day was 1.027 – which is incredibly low.”
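For context, PUE (power usage effectiveness) is total facility power divided by IT equipment power, so 1.0 means zero overhead for cooling, lighting, and power conversion. A minimal sketch with illustrative figures (not NREL's actual measurements) showing what a 1.027 reading implies:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers: a 1,000 kW IT load with only 27 kW of
# facility overhead yields the 1.027 PUE quoted above.
print(round(pue(1027.0, 1000.0), 3))  # 1.027
```

Typical enterprise data centers run well above 1.5, which is why a 1.027 figure stands out.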

TACC Powers Climate Studies with GRACE Project

Researchers are using powerful supercomputers at TACC to process data from the Gravity Recovery and Climate Experiment (GRACE). “Intended to last just five years in orbit for a limited, experimental mission to measure small changes in the Earth’s gravitational fields, GRACE operated for more than 15 years and provided unprecedented insight into our global water resources, from more accurate measurements of polar ice loss to a better view of ocean currents and the rise in global sea levels.”

Video: Exascale Deep Learning for Climate Analytics

Thorsten Kurth and Josh Romero gave this talk at the GPU Technology Conference. “We’ll discuss how we scaled the training of a single deep learning model to 27,360 V100 GPUs (4,560 nodes) on the OLCF Summit HPC System using the high-productivity TensorFlow framework. This talk is targeted at deep learning practitioners who are interested in learning what optimizations are necessary for training their models efficiently at massive scale.”
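Training at that scale typically relies on data parallelism: each GPU computes gradients on its own shard of the batch, and an allreduce collective averages them so every replica applies the identical update. A minimal NumPy sketch of that averaging step (the function name is illustrative; this simulates the collective in-process, not the authors' actual TensorFlow setup):

```python
import numpy as np

def allreduce_average(local_grads: list) -> np.ndarray:
    """Average one gradient tensor across all workers -- the core
    collective behind synchronous data-parallel training."""
    return np.mean(np.stack(local_grads), axis=0)

# Four simulated workers, each with gradients from its own data shard.
rng = np.random.default_rng(42)
grads = [rng.standard_normal(3) for _ in range(4)]
avg = allreduce_average(grads)

# Every replica applies the same averaged update, keeping model
# weights bitwise-identical across all GPUs.
weights = np.zeros(3)
weights -= 0.1 * avg
```

On a real system the averaging runs as a bandwidth-optimized ring or tree allreduce over the interconnect, and making that step scale to tens of thousands of GPUs is precisely the kind of optimization the talk covers.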