Supercomputing Galactic Winds with Cholla

Using the Titan supercomputer at Oak Ridge National Laboratory, a team of astrophysicists created a set of galactic wind simulations of the highest resolution ever performed. The simulations will allow researchers to gather and interpret more accurate, detailed data that elucidates how galactic winds affect the formation and evolution of galaxies.

AI Approach Points to Bright Future for Fusion Energy

Researchers are using Deep Learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called “shots,” both those that led to disruptions and those that did not, to determine which factors cause disruptions.”
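
As a rough illustration of the kind of architecture described, the sketch below pairs a small 1-D convolutional front end with an LSTM to score whether a shot will disrupt. The layer sizes, channel count, and framework (PyTorch) are assumptions made for the example, not FRNN's actual implementation.

```python
# Illustrative sketch only: a convolutional front end feeding a recurrent
# layer for binary disruption prediction over multichannel plasma signals.
# Layer sizes and channel counts are assumptions, not FRNN's actual code.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_channels=8, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features from each time window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # An LSTM integrates those features over the length of the shot
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        # Final layer scores the probability that the shot disrupts
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, time, channels)
        f = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.rnn(f)
        return torch.sigmoid(self.head(h[-1]))

# Example: a batch of 4 shots, 500 time steps, 8 measured signals
model = DisruptionPredictor()
probs = model(torch.randn(4, 500, 8))  # disruption probability per shot
```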

Podcast: Quantum Applications are Always Hybrid

In this podcast, the Radio Free HPC team looks at the inherently hybrid nature of quantum computing applications. “If you’re always going to have to mix classical code with quantum code then you need an environment that is built for that workflow, and thus we see a lot of attention given to that in the QIS (Quantum Information Science) area. This is reminiscent of OpenGL for graphics accelerators and OpenCL/CUDA for compute accelerators.”
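
To make the classical/quantum interleaving concrete, here is a minimal sketch of a hybrid loop in which classical code repeatedly picks circuit parameters and a quantum step evaluates them. The quantum step is stood in for by the analytic expectation value of a single-qubit RY(theta) circuit; on real hardware it would be a call into a vendor SDK, which is exactly where a QIS programming environment comes in.

```python
# Minimal sketch of the classical/quantum interleaving the podcast describes.
# The "quantum" step here is just the analytic expectation value of a
# single-qubit RY(theta) circuit -- a stand-in for a call to real quantum
# hardware or a vendor SDK.
import math

def quantum_expectation(theta):
    # For RY(theta)|0>, the expectation value of Z is cos(theta).
    # On real hardware this would be estimated from repeated shots.
    return math.cos(theta)

def classical_optimizer_step(theta, lr=0.2):
    # Classical code decides the next circuit parameter
    # (simple gradient descent on <Z>, whose derivative is -sin(theta)).
    grad = -math.sin(theta)
    return theta - lr * grad

theta = 0.1
for step in range(50):
    energy = quantum_expectation(theta)       # "quantum" evaluation
    theta = classical_optimizer_step(theta)   # classical update
print(f"theta = {theta:.3f}, <Z> = {quantum_expectation(theta):.3f}")
```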

Case Study: Supercomputing Natural Gas Turbine Generators for Huge Boosts in Efficiency

Hyperion Research has published a new case study on how General Electric engineers were able to nearly double the efficiency of gas turbines with the help of supercomputing simulation. “With these advanced modeling and simulation capabilities, GE was able to replicate previously observed combustion instabilities. Following that validation, GE Power engineers then used the tools to design improvements in the latest generation of heavy-duty gas turbine generators to be delivered to utilities in 2017. These turbine generators, when combined with a steam cycle, provided the ability to convert an amazing 64% of the energy value of the fuel into electricity, far superior to the traditional 33% to 44%.”
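
The 64% figure is a combined-cycle number, and the arithmetic behind it is straightforward: the steam cycle recovers part of the heat the gas turbine rejects. The individual efficiencies in the sketch below are illustrative assumptions, not GE's figures.

```python
# Back-of-the-envelope combined-cycle efficiency, showing how a gas turbine
# plus a steam bottoming cycle can reach the ~64% quoted above.
# The individual efficiencies are illustrative assumptions, not GE figures.
eta_gas_turbine = 0.40   # fraction of fuel energy converted by the gas turbine
eta_steam_cycle = 0.40   # fraction of the rejected heat recovered as electricity

# The steam cycle only sees the energy the gas turbine did not convert
eta_combined = eta_gas_turbine + eta_steam_cycle * (1.0 - eta_gas_turbine)
print(f"combined-cycle efficiency ≈ {eta_combined:.0%}")   # ≈ 64%
```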

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”

Video: Thomas Zacharia from ORNL Testifies at House Hearing on the Need for Supercomputing

In this video, Thomas Zacharia from ORNL testifies before the House Energy and Commerce hearing on DOE Modernization. “At the OLCF, we are deploying a system that may well be the world’s most powerful supercomputer when it begins operating later this year. Summit will be at least five times as powerful as Titan. It will also be an exceptional resource for deep learning, with the potential to address challenging data analytics problems in a number of scientific domains. Summit is among the products of CORAL, the Collaboration of Oak Ridge, Argonne, and Livermore.”

Using the Titan Supercomputer to Accelerate Deep Learning Networks

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.

Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer

Travis Johnston from ORNL gave this talk at SC17. “Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing this search. MENNDL is capable of evolving not only the numeric hyper-parameters, but is also capable of evolving the arrangement of layers within the network. The second approach is implemented using Apache Spark at scale on Titan. The technique we present is an improvement over hyper-parameter sweeps because we don’t require assumptions about independence of parameters and is more computationally feasible than grid-search.”
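
For readers unfamiliar with the approach, the toy sketch below shows the core evolutionary loop: keep a population of hyper-parameter sets, score each one by training a network, retain the fittest, and mutate them into the next generation. The search space and fitness function are placeholders; MENNDL additionally evolves the arrangement of layers and distributes the evaluations across many nodes.

```python
# Toy sketch of the evolutionary idea behind a MENNDL-style search.
# The search space and fitness function are placeholders for illustration.
import random

SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "num_filters":   (8, 128),
    "kernel_size":   (3, 9),
}

def random_individual():
    return {k: random.uniform(*bounds) for k, bounds in SEARCH_SPACE.items()}

def mutate(ind, scale=0.2):
    # Perturb one hyper-parameter, clamped to its allowed range
    child = dict(ind)
    key = random.choice(list(SEARCH_SPACE))
    lo, hi = SEARCH_SPACE[key]
    child[key] = min(hi, max(lo, child[key] + random.gauss(0, scale * (hi - lo))))
    return child

def fitness(ind):
    # Placeholder: a real run would train a network with these
    # hyper-parameters (one node per evaluation) and return validation accuracy.
    return -abs(ind["learning_rate"] - 0.01) - abs(ind["num_filters"] - 64) / 64

population = [random_individual() for _ in range(16)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]  # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(12)]

print("best hyper-parameters:", max(population, key=fitness))
```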

Video: 25 Years of Supercomputing at Oak Ridge

“Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.”