Converging HPC, Big Data, and AI at the Tokyo Institute of Technology

Satoshi Matsuoka from the Tokyo Institute of Technology gave this talk at the NVIDIA booth at SC17. “TSUBAME3 embodies various BYTES-oriented features to allow for HPC to BD/AI convergence at scale, including significant scalable horizontal bandwidth as well as support for deep memory hierarchy and capacity, along with high flops in low precision arithmetic for deep learning.”

Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan

Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). “Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo’s Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs.”

Radio Free HPC Reviews the SC16 Student Cluster Competition Configurations & Results

In this podcast, the Radio Free HPC team reviews the results from the SC16 Student Cluster Competition. “This year, the advent of clusters with the new Nvidia Tesla P100 GPUs made a huge impact, nearly tripling the Linpack record for the competition. For the first time ever, the team that won top honors also won the award for achieving the highest performance on the Linpack benchmark application. The winning team, ‘SwanGeese,’ is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery.”