Fast.AI Achieves Record ImageNet performance with NVIDIA V100 Tensor Core GPUs

The NVIDIA blog points us to this story on how fast.ai just completed a new deep learning benchmark milestone. Using NVIDIA V100 GPUs on AWS with PyTorch, the company can now train ImageNet to 93% accuracy in just 18 minutes. “DIU and fast.ai will be releasing software to allow anyone to easily train and monitor their own distributed models on AWS, using the best practices developed in this project,” said Jeremy Howard, a founding researcher at fast.ai. “We entered this competition because we wanted to show that you don’t have to have huge resources to be at the cutting edge of AI research, and we were quite successful in doing so.”

Google Cloud TPU Machine Learning Accelerators now in Beta

John Barrus writes that Cloud TPUs are now available in beta on Google Cloud Platform to help machine learning experts train and run their ML models more quickly. “Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale out specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board.”