

Using the Titan Supercomputer to Accelerate Deep Learning Networks

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.

Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer

Travis Johnston from ORNL gave this talk at SC17. “Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing the search for well-performing network hyper-parameters. MENNDL is capable of evolving not only the numeric hyper-parameters but also the arrangement of layers within the network. The approach is implemented using Apache Spark at scale on Titan. The technique we present is an improvement over hyper-parameter sweeps because it doesn’t require assumptions about the independence of parameters, and it is more computationally feasible than grid search.”
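The core idea behind an evolutionary hyper-parameter search like MENNDL's can be sketched in a few lines: maintain a population of candidate configurations, score each one, keep the fittest, and mutate them to produce the next generation. The sketch below is purely illustrative and is not MENNDL's actual code; the search space, the toy `fitness` function (which stands in for the validation accuracy of a trained network), and all names are assumptions for demonstration.

```python
import random

# Hypothetical search space; a real system like MENNDL also evolves
# the arrangement of layers, not just numeric hyper-parameters.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_layers": [2, 4, 8],
    "filters": [16, 32, 64],
}

def random_individual():
    """Sample one hyper-parameter configuration at random."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    """Toy stand-in for validation accuracy: in a real system each
    candidate network would be trained and evaluated (on Titan, in
    parallel across many nodes)."""
    return -abs(ind["learning_rate"] - 1e-3) - abs(ind["num_layers"] - 4) / 10.0

def mutate(ind):
    """Re-sample one randomly chosen hyper-parameter."""
    child = dict(ind)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(pop_size=20, generations=10):
    """Truncation-selection evolutionary loop: keep the top half of
    each generation, then refill the population with mutated copies."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best)
```

Unlike a grid search, this loop spends its evaluation budget near configurations that already score well, and it makes no independence assumption between parameters, since whole configurations are selected and mutated together.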