Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer


In this video from the NVIDIA booth at SC17, Travis Johnston from ORNL presents: Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer.

“There has been a surge of success in using deep learning as it has provided a new state of the art for a variety of domains. While these models learn their parameters through data-driven methods, model selection through hyper-parameter choices remains a tedious and highly intuition-driven task. We’ve developed two approaches to address this problem. Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing this search. MENNDL is capable of evolving not only the numeric hyper-parameters but also the arrangement of layers within the network. The second approach is implemented using Apache Spark at scale on Titan. The technique we present is an improvement over hyper-parameter sweeps because it does not require assumptions about the independence of parameters, and it is more computationally feasible than grid search.”
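To make the evolutionary idea concrete, here is a minimal sketch of a generic evolutionary hyper-parameter search. It is not MENNDL's actual implementation; the search space, mutation scheme, and toy fitness function below are all illustrative assumptions. In a real system, the fitness function would train and validate a network (distributed across Titan's nodes), and MENNDL additionally evolves the layer arrangement, which this sketch omits.

```python
import random

# Hypothetical search space -- not MENNDL's actual parameters.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
}

def random_individual(rng):
    """Sample one hyper-parameter configuration at random."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(ind, rng):
    """Re-sample one randomly chosen hyper-parameter."""
    child = dict(ind)
    key = rng.choice(list(SEARCH_SPACE))
    child[key] = rng.choice(SEARCH_SPACE[key])
    return child

def evolve(fitness, generations=10, pop_size=8, seed=0):
    """Simple truncation-selection evolutionary search.

    `fitness` maps a configuration to a score (higher is better);
    in a real system it would train and evaluate a network.
    """
    rng = random.Random(seed)
    population = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # keep the best half
        children = [mutate(rng.choice(parents), rng)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy stand-in fitness: rewards more filters and smaller kernels.
def toy_fitness(cfg):
    return cfg["num_filters"] - 10 * (cfg["kernel_size"] - 3)

best = evolve(toy_fitness)
```

Because each fitness evaluation is an independent network training run, evaluations within a generation parallelize naturally across compute nodes, which is what makes this approach a good fit for a machine like Titan.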

Travis Johnston is a research associate with the Computational Data Analytics group at Oak Ridge National Laboratory. Travis earned his Ph.D. in mathematics from the University of South Carolina in 2014. Since then, he has focused on machine learning and high performance computing, working in the Global Computing Lab at the University of Delaware.

See our complete coverage of SC17

Check out our insideHPC Events Calendar