Shinichiro Takizawa from AIST gave this talk at the MVAPICH User Group. “ABCI is the world’s first large-scale Open AI Computing Infrastructure, constructed and operated by AIST, Japan. It delivers 19.9 petaflops of HPL performance and the world’s fastest training time of 1.17 minutes for ResNet-50 training on the ImageNet dataset as of July 2019. In this talk, we focus on ABCI’s network architecture and the communication libraries available on ABCI, and show their performance and recent research achievements.”
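The communication libraries in question are MPI implementations such as MVAPICH2, the subject of the user group where the talk was given, and data-parallel ResNet-50 training is dominated by allreduce operations over gradient buffers. As a minimal hedged sketch, not ABCI's actual benchmark code, the following C program times a single MPI_Allreduce over a buffer of 25 million floats, an assumed size chosen to roughly match ResNet-50's parameter count:

```c
/* Illustrative sketch only: times one allreduce of a gradient-sized
 * buffer. Compiles against any MPI implementation, e.g. MVAPICH2. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* ~25M floats: an assumption approximating ResNet-50's parameters. */
    const int n = 25 * 1000 * 1000;
    float *send = malloc(n * sizeof(float));
    float *recv = malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) send[i] = 1.0f;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_Allreduce(send, recv, n, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("allreduce of %d floats across %d ranks: %.3f ms\n",
               n, size, (t1 - t0) * 1e3);

    free(send);
    free(recv);
    MPI_Finalize();
    return 0;
}
```

Run with, for example, `mpirun -np 4 ./allreduce_bench`; in practice a benchmark would repeat the collective many times and report the average, but the single call shown here is enough to illustrate the primitive whose performance the talk examines.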
Converging HPC, Big Data, and AI at the Tokyo Institute of Technology
Satoshi Matsuoka from the Tokyo Institute of Technology gave this talk at the NVIDIA booth at SC17. “TSUBAME3 embodies various BYTES-oriented features to allow for HPC-to-BD/AI convergence at scale, including significant scalable horizontal bandwidth as well as support for deep memory hierarchy and capacity, along with high flops in low-precision arithmetic for deep learning.”
Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan
Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). “Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo’s Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs.”
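Those figures are internally consistent with a dual-socket, quad-GPU node design: 1,088 servers × 2 Xeon processors per server = 2,176 CPUs, and 1,088 servers × 4 GPUs per server = 4,352 GPUs.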