
Case Study: Supercomputing Natural Gas Turbine Generators for Huge Boosts in Efficiency

Hyperion Research has published a new case study on how General Electric engineers were able to nearly double the efficiency of gas turbines with the help of supercomputing simulation. “With these advanced modeling and simulation capabilities, GE was able to replicate previously observed combustion instabilities. Following that validation, GE Power engineers then used the tools to design improvements in the latest generation of heavy-duty gas turbine generators to be delivered to utilities in 2017. These turbine generators, when combined with a steam cycle, provided the ability to convert an amazing 64% of the energy value of the fuel into electricity, far superior to the traditional 33% to 44%.”
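The 64% figure refers to combined-cycle operation, in which a steam cycle recovers exhaust heat that a simple-cycle gas turbine would otherwise waste. As a rough, back-of-the-envelope illustration (the efficiency values below are hypothetical placeholders, not GE's figures), the arithmetic works out like this:

```python
# Illustrative combined-cycle arithmetic (not GE's engineering model):
# a gas turbine converts a fraction of the fuel energy to electricity,
# and a steam cycle recovers part of the remaining exhaust heat.

def combined_cycle_efficiency(gas_eff: float, steam_eff: float) -> float:
    """Overall efficiency when a steam cycle reuses the gas turbine's waste heat."""
    return gas_eff + (1.0 - gas_eff) * steam_eff

# Hypothetical numbers chosen only to show how roughly 40% simple-cycle
# efficiency can climb into the mid-60% range once exhaust heat is recovered.
print(combined_cycle_efficiency(0.40, 0.40))  # -> 0.64, i.e. 64%
```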

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”
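Catastrophe models of this kind typically build a stochastic event set: many synthetic years of flood activity are simulated, and the losses in each year are tallied to estimate how often a given loss level is exceeded. The sketch below is a generic, simplified illustration of that idea; the distributions and parameters are assumptions for demonstration, not KatRisk's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: an average of 0.2 damaging floods per year for a
# location, with lognormally distributed per-event losses.
YEARS = 50_000
EVENT_RATE = 0.2
LOSS_MU, LOSS_SIGMA = 12.0, 1.5   # lognormal parameters (log-dollars)

annual_losses = np.zeros(YEARS)
event_counts = rng.poisson(EVENT_RATE, size=YEARS)
for year, n_events in enumerate(event_counts):
    if n_events:
        annual_losses[year] = rng.lognormal(LOSS_MU, LOSS_SIGMA, size=n_events).sum()

# 1-in-100-year loss: the annual loss exceeded in roughly 1% of simulated years.
print(np.quantile(annual_losses, 0.99))
```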

Video: Thomas Zacharia from ORNL Testifies at House Hearing on the Need for Supercomputing

In this video, Thomas Zacharia from ORNL testifies at a House Energy and Commerce Committee hearing on DOE Modernization. “At the OLCF, we are deploying a system that may well be the world’s most powerful supercomputer when it begins operating later this year. Summit will be at least five times as powerful as Titan. It will also be an exceptional resource for deep learning, with the potential to address challenging data analytics problems in a number of scientific domains. Summit is among the products of CORAL, the Collaboration of Oak Ridge, Argonne, and Livermore.”

Using the Titan Supercomputer to Accelerate Deep Learning Networks

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.

Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer

Travis Johnston from ORNL gave this talk at SC17. “Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing this search. MENNDL is capable of evolving not only the numeric hyper-parameters, but is also capable of evolving the arrangement of layers within the network. The second approach is implemented using Apache Spark at scale on Titan. The technique we present is an improvement over hyper-parameter sweeps because we don’t require assumptions about independence of parameters and is more computationally feasible than grid-search.”
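For readers unfamiliar with the approach, an evolutionary search keeps a population of candidate network configurations, scores each one by training it, and breeds the next generation from the best performers. The following is a minimal, purely illustrative sketch of that loop; it is not MENNDL's code, which also evolves layer arrangements and runs at scale on Titan:

```python
import random

# Toy evolutionary search over two numeric hyper-parameters (learning rate
# and layer count). Simplified illustration only.
SEARCH_SPACE = {"learning_rate": (1e-4, 1e-1), "num_layers": (2, 12)}

def random_candidate():
    return {
        "learning_rate": random.uniform(*SEARCH_SPACE["learning_rate"]),
        "num_layers": random.randint(*SEARCH_SPACE["num_layers"]),
    }

def mutate(parent):
    child = dict(parent)
    child["learning_rate"] *= random.uniform(0.5, 2.0)
    child["num_layers"] = max(2, child["num_layers"] + random.choice([-1, 0, 1]))
    return child

def fitness(candidate):
    # Stand-in for training the network and returning validation accuracy.
    return -abs(candidate["learning_rate"] - 0.01) - abs(candidate["num_layers"] - 6)

population = [random_candidate() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                   # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=fitness))
```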

Video: 25 Years of Supercomputing at Oak Ridge

“Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.”

Interview: Piz Daint Supercomputer Advances to the #3 Position on the TOP500

In this video from PASC17 in Lugano, Michele De Lorenzi from CSCS discusses the recent advancement of the Piz Daint supercomputer to the #3 position on the TOP500. After that, he describes the mission of the PASC conference and the location of PASC18 next year.

DOE’s INCITE Program Seeks Advanced Computational Research Proposals for 2018

Today the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program announced it is accepting proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering, and computer science domains. DOE’s Office of Science plans to award over 6 billion supercomputer processor-hours at Argonne National Laboratory and […]

Podcast: Supercomputing Cancer Research and the Human Brain

In this WUOT podcast, Jack Wells from ORNL describes how the Titan supercomputer helps advance science. “The world’s third-most powerful supercomputer is located in Oak Ridge, and though it bears the imposing name TITAN, its goals and capabilities are more quotidian than dystopian. After that, WUOT’s Megan Jamerson tells us about a project at ORNL that uses TITAN to help humans digest vast sums of information from medical reports. If successful, the project could create new understandings about the demographics of cancer.”