In this video from the HPC User Forum in Tucson, Dag Lohmann from KatRisk presents: Using ORNL Titan to Develop 50,000 Years of Flood Risk Scenarios for the National Flood Insurance Program.
In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. With backgrounds in engineering, hydrology, and risk modeling, the company’s three founders knew that many factors, including annual climate patterns and local infrastructure, affect flood risk. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. But they knew they would need a lot of computing power to reach that level of detail.
That’s when CEO Dag Lohmann sought computing time on Titan, the nation’s most powerful supercomputer, at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy (DOE) Office of Science User Facility at DOE’s Oak Ridge National Laboratory. Through the OLCF’s industrial partnership program, known as Accelerating Competitiveness through Computational Excellence, KatRisk received 5 million processor hours on the machine.
“KatRisk used supercomputing to determine the flood risk for every single building in the United States, essential information used by insurers to price insurance and manage risk,” Lohmann said.
The company leveraged Titan’s GPUs to develop flood risk maps at 10-meter-by-10-meter resolution for the United States and at 90-meter-by-90-meter resolution or finer worldwide. The KatRisk team focused on combining hydrology models, which describe how much water will flow, with computationally intensive hydraulic models that calculate water depth. In this way, the company could predict not only the probability of a flood in a given area but also how severe it might be, an important analysis for the insurance industry.
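To make the hydrology-plus-hydraulics pairing concrete, here is a minimal Python sketch of the idea: simulate many years of annual peak flows (the hydrology side), convert each flow to a water depth (the hydraulics side), and then read off both the probability and the severity of flooding. KatRisk’s production models are far more sophisticated; everything here, including the Gumbel flow distribution, the Manning-style rating curve, and all channel parameters, is an invented illustration, not taken from KatRisk’s work.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hydrology: simulate annual peak flows (m^3/s) for one river reach. ---
# A Gumbel distribution is a common choice for annual flood peaks; the
# location and scale parameters here are made up for illustration.
n_years = 50_000
peak_flow = rng.gumbel(120.0, 45.0, size=n_years)

# --- Hydraulics: convert flow to depth with a simplified rating curve ---
# based on Manning's equation for a wide rectangular channel:
#   Q = (1/n) * w * h^(5/3) * sqrt(S)  =>  h = (Q * n / (w * sqrt(S)))^(3/5)
manning_n = 0.035   # channel roughness (assumed)
width = 80.0        # channel width in meters (assumed)
slope = 0.001       # bed slope (assumed)
depth = (peak_flow * manning_n / (width * np.sqrt(slope))) ** 0.6

# Flooding occurs when the water depth exceeds the bank height.
bank_height = 3.0   # meters (assumed)
flood_depth = np.clip(depth - bank_height, 0.0, None)

# --- Risk metrics: probability *and* severity. ---
annual_flood_prob = np.mean(flood_depth > 0)             # how often it floods
depth_100yr = np.quantile(flood_depth, 1 - 1 / 100)      # 100-year flood depth

print(f"Annual probability of flooding: {annual_flood_prob:.3%}")
print(f"100-year flood depth above bank: {depth_100yr:.2f} m")
```

Running tens of thousands of simulated years like this, but over every grid cell of a continent-scale map rather than a single channel, is what makes the hydraulic step so computationally expensive and why GPU acceleration on a machine like Titan matters.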
“Titan helped establish us as one of the leading catastrophe risk modeling companies in the country,” Lohmann said. “These simulations included hydraulic modeling, which is the most time-consuming part of the compute cycle.”
Source: Katie Elyce Jones at ORNL
See more talks at the HPC User Forum Video Gallery

Check out our insideHPC Events Calendar