Call for Submissions: Deep Learning on Supercomputers Workshop at SC19


The Deep Learning (DL) on Supercomputers workshop has issued its Call for Submissions. Now in its third year, the workshop will be held in conjunction with the SC19 conference in Denver on Nov. 17.

This third workshop in the Deep Learning on Supercomputers series provides a forum for practitioners working on all aspects of DL for scientific research in the High Performance Computing (HPC) context to present their latest results. The general theme of the series is the intersection of DL and HPC. Its scope encompasses application development for scientific use cases on HPC platforms; DL methods applied to numerical simulation; fundamental algorithms, enhanced procedures, and software development methods that enable scalable training and inference; hardware changes that affect future supercomputer design; and machine deployment, performance evaluation, and reproducibility practices for DL applications, with an emphasis on scientific usage.

Topics include but are not limited to:

  • Emerging scientific applications driven by DL methods
  • Novel interactions between DL and traditional numerical simulation
  • Effectiveness and limitations of DL methods in scientific research
  • Algorithms and procedures to enhance reproducibility of scientific DL applications
  • Data management through the life cycle of scientific DL applications
  • General algorithms and procedures for efficient and scalable DL training
  • General algorithms and systems for large scale model serving for scientific use cases
  • New software, and enhancements to existing software, for scalable DL
  • DL communication optimization at scale
  • I/O optimization for DL at scale
  • Hardware (processors, accelerators, memory hierarchy, interconnect) changes with impact on deep learning in the HPC context
  • DL performance evaluation and analysis on deployed systems
  • Performance modeling and tuning of DL on supercomputers
  • DL benchmarks on supercomputers

As part of the reproducibility initiative, the workshop requires authors to provide information such as the algorithms, software releases, datasets, and hardware configurations used. For performance evaluation studies, authors are encouraged to use well-known benchmarks or applications with openly accessible datasets: for example, MLPerf and ResNet-50 with the ImageNet-1K dataset.

Submissions are due Sept. 1, 2019.

For questions, please contact the workshop organizers.
