Rice Univ. Researchers Claim 15x AI Model Training Speed-up Using CPUs

Reports are circulating in AI circles that researchers from Rice University claim a breakthrough in AI model training acceleration – without using accelerators. Running AI software on commodity x86 CPUs, the Rice computer science team says neural networks can be trained 15x faster than on platforms that use GPUs. If valid, the new approach would be a double boon for organizations implementing AI strategies: faster model training on less costly processors.
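The reports summarized here don't spell out the mechanism behind the claim. Work in this vein typically avoids the dense matrix multiplies that favor GPUs by using locality-sensitive hashing to activate only a small subset of neurons per input, turning training into the sparse, lookup-heavy workload that CPUs handle well. The Python sketch below is purely illustrative of that general class of technique, not the Rice method itself; every name, size, and parameter is a hypothetical of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, BITS = 64, 4096, 6      # input width, neurons in the layer, hash bits

W = rng.standard_normal((N, D))          # one weight vector per neuron
planes = rng.standard_normal((BITS, D))  # shared random hyperplanes

def simhash(v):
    # Random-hyperplane LSH: nearby vectors tend to land in the same bucket.
    bits = (planes @ v) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Bucket neurons by the hash of their weight vectors (done up front;
# schemes like this re-hash periodically as the weights drift).
buckets = {}
for i in range(N):
    buckets.setdefault(simhash(W[i]), []).append(i)

def sparse_forward(x):
    # Compute activations only for neurons that collide with x's bucket,
    # instead of running the full N-by-D matrix multiply.
    active = buckets.get(simhash(x), [])
    out = np.zeros(N)
    if active:
        idx = np.asarray(active)
        out[idx] = np.maximum(W[idx] @ x, 0.0)  # ReLU on the active subset
    return out, len(active)

x = rng.standard_normal(D)
_, n_active = sparse_forward(x)
print(f"touched {n_active} of {N} neurons")
```

With 6 hash bits there are 64 buckets, so each input touches roughly 1/64 of the layer; that sparsity, rather than raw arithmetic throughput, is where the CPU-side savings come from in approaches of this kind.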

Scalable and Distributed DNN Training on Modern HPC Systems

DK Panda from Ohio State University gave this talk at the Swiss HPC Conference. “We will provide an overview of interesting trends in DNN design and how cutting-edge hardware architectures are playing a key role in moving the field forward. We will also present an overview of different DNN architectures and DL frameworks. Most DL frameworks started with a single-node/single-GPU design.”
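The abstract stops at the observation that most frameworks began as single-node/single-GPU designs; the usual first step beyond that is data parallelism, where each worker trains on its own shard of the data and gradients are averaged across workers before every update. As a rough, self-contained illustration of that pattern (not code from the talk; the workers are simulated in one process and all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
N_WORKERS, D, LR = 4, 8, 0.1

w = rng.standard_normal(D)                  # shared model parameters
shards = [rng.standard_normal((32, D)) for _ in range(N_WORKERS)]
targets = [s @ np.ones(D) for s in shards]  # synthetic regression targets

for step in range(100):
    # Each worker computes a local gradient of squared error on its shard.
    grads = []
    for X, t in zip(shards, targets):
        err = X @ w - t
        grads.append(2 * X.T @ err / len(X))
    # "Allreduce" step: average gradients so every worker applies the
    # same update and the model replicas stay in sync.
    g = np.mean(grads, axis=0)
    w -= LR * g

loss = sum(np.mean((X @ w - t) ** 2) for X, t in zip(shards, targets)) / N_WORKERS
print("final loss:", loss)
```

In real systems the averaging step is an allreduce collective over the network rather than an in-process loop, which is exactly where the HPC interconnects and communication software that the talk addresses come into play.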