

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”

Video: Thomas Zacharia from ORNL Testifies at House Hearing on the Need for Supercomputing

In this video, Thomas Zacharia from ORNL testifies at the House Energy and Commerce Committee hearing on DOE Modernization. “At the OLCF, we are deploying a system that may well be the world’s most powerful supercomputer when it begins operating later this year. Summit will be at least five times as powerful as Titan. It will also be an exceptional resource for deep learning, with the potential to address challenging data analytics problems in a number of scientific domains. Summit is among the products of CORAL, the Collaboration of Oak Ridge, Argonne, and Livermore.”

Using the Titan Supercomputer to Accelerate Deep Learning Networks

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.

Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer

Travis Johnston from ORNL gave this talk at SC17. “Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing this search. MENNDL is capable of evolving not only the numeric hyper-parameters but also the arrangement of layers within the network. The second approach is implemented using Apache Spark at scale on Titan. The technique we present is an improvement over hyper-parameter sweeps because it does not require assumptions about the independence of parameters, and it is more computationally feasible than grid search.”
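
To make the evolutionary idea concrete, here is a minimal, self-contained Python sketch of an evolutionary hyper-parameter search in the spirit of MENNDL. It is not the ORNL code: the genome encoding, the mutation scheme, and the stand-in fitness function are illustrative assumptions, and the expensive step MENNDL distributes across Titan's nodes (training each candidate network) is replaced here by a toy scoring function.

# A minimal sketch of an evolutionary hyper-parameter search (not the MENNDL code).
# Each genome encodes both numeric hyper-parameters and a layer arrangement.
import random

def random_genome():
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "layers": [random.choice(["conv", "pool"]) for _ in range(random.randint(2, 6))],
    }

def mutate(genome):
    # Perturb the numeric hyper-parameter and either tweak or extend the layer list.
    child = {"learning_rate": genome["learning_rate"] * random.uniform(0.5, 2.0),
             "layers": list(genome["layers"])}
    if child["layers"] and random.random() < 0.5:
        child["layers"][random.randrange(len(child["layers"]))] = random.choice(["conv", "pool"])
    else:
        child["layers"].append(random.choice(["conv", "pool"]))
    return child

def fitness(genome):
    # Placeholder: in practice this would train the encoded network and return its
    # validation accuracy -- the costly step that is farmed out to HPC nodes.
    return -abs(len(genome["layers"]) - 4) - abs(genome["learning_rate"] - 0.01) * 100

population = [random_genome() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # keep the fittest genomes
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("Best genome found:", max(population, key=fitness))

Because each fitness evaluation is independent, the candidate evaluations in a generation can be spread across many nodes, which is what makes the approach attractive at Titan's scale.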

Video: 25 Years of Supercomputing at Oak Ridge

“Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.”

Interview: Piz Daint Supercomputer Advances to the #3 Position on the TOP500

In this video from PASC17 in Lugano, Michele De Lorenzi from CSCS discusses the recent advancement of the Piz Daint supercomputer to the #3 position on the TOP500 list. He then describes the mission of the PASC conference and the location of PASC18 next year.

DOE’s INCITE Program Seeks Advanced Computational Research Proposals for 2018

Today the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program announced it is accepting proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering, and computer science domains. DOE’s Office of Science plans to award over 6 billion supercomputer processor-hours at Argonne National Laboratory and […]

Podcast: Supercomputing Cancer Research and the Human Brain

In this WUOT podcast, Jack Wells from ORNL describes how the Titan supercomputer helps advance science. “The world’s third-most powerful supercomputer is located in Oak Ridge, and though it bears the imposing name TITAN, its goals and capabilities are more quotidian than dystopian. After that, WUOT’s Megan Jamerson tells us about a project at ORNL that uses TITAN to help humans digest vast sums of information from medical reports. If successful, the project could create new understandings about the demographics of cancer.”

Supercomputing Subatomic Particle Research on Titan

By using multiple grids and separating the modes in the problem onto the grids where they are handled most efficiently, the researchers can get through their long line of calculations more quickly and easily. “GPUs provide a lot of memory bandwidth,” Clark said. “Solving LQCD problems computationally is almost always memory-bound, so if you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker.” In other words, memory bandwidth is like a roadway: having more lanes keeps vehicles moving and lessens the potential for traffic backups.
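
As a rough illustration of why bandwidth, rather than raw compute, sets the pace for a memory-bound solver, the short roofline-style estimate below shows attainable performance as a function of arithmetic intensity (FLOPs per byte moved). The peak and bandwidth figures are assumed round numbers for illustration, not measurements of Titan's GPUs or of any particular LQCD code.

# A back-of-the-envelope roofline estimate: a memory-bound kernel's speed tracks
# memory bandwidth. The numbers below are illustrative assumptions only.
peak_flops = 3.9e12          # assumed GPU peak, FLOP/s
mem_bandwidth = 250e9        # assumed GPU memory bandwidth, bytes/s

def attainable_gflops(arithmetic_intensity):
    """Roofline model: performance is capped by either compute or bandwidth.

    arithmetic_intensity -- FLOPs performed per byte moved from memory.
    """
    return min(peak_flops, mem_bandwidth * arithmetic_intensity) / 1e9

# A sparse stencil operator of the kind used in LQCD does only a few FLOPs per
# byte, so adding bandwidth ("lanes") directly speeds it up.
for intensity in (0.5, 1.0, 2.0, 4.0, 16.0):
    print(f"intensity {intensity:5.1f} FLOP/byte -> {attainable_gflops(intensity):8.1f} GFLOP/s")

At low arithmetic intensity the attainable rate scales linearly with bandwidth, which is the point of Clark's roadway analogy.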

Supercomputing Plant Polymers for Biofuels

A huge barrier in converting cellulose polymers to biofuel lies in removing other biomass polymers that subvert this chemical process. To overcome this hurdle, large-scale computational simulations are picking apart lignin, one of those inhibiting polymers, and its interactions with cellulose and other plant components. The results point toward ways to optimize biofuel production and […]