Podcast: Satoshi Matsuoka on the Convergence of HPC and Big Data

In this podcast from the 2015 NCSA Blue Waters Symposium, Satoshi Matsuoka describes the convergence of HPC, Big Data, and the Cloud as a delivery model.

Rapid growth in the use cases and demands for extreme computing and huge data processing is driving the two infrastructures to converge. Tokyo Tech’s TSUBAME3.0, the 2016 successor to the highly successful TSUBAME2/2.5, will deploy a series of innovative technologies, including ultra-efficient liquid cooling and power control, petabytes of non-volatile memory, and a low-cost Petabit-class interconnect. In particular, our Extreme Big Data (EBD) project is pursuing co-design of a convergent system stack for future data and computing workloads. TSUBAME3 and the machines beyond it will be an integral part of our national supercomputing/Big Data infrastructure, HPCI (High Performance Computing Infrastructure of Japan), which is similar in scale to XSEDE, with about 40 Petaflops of aggregate computing capability circa 2015 and an expected half-exaflop by 2022. The trend towards convergence is not merely strategic, however, but inevitable: as Moore’s law ends, sustained growth in data capabilities, rather than compute, will drive overall capacity, accelerating research and ultimately industry.

Download the MP3 * Sign up for our insideHPC Newsletter.