Video: TensorFlow for HPC?


Peter Braam

In this podcast, Peter Braam looks at how the TensorFlow framework could be used to accelerate high performance computing.

Google has developed TensorFlow, a remarkably complete platform for machine learning. The platform's performance raises the question of whether it could benefit HPC in much the same way that GPUs did.

Google’s Tensor Processing Unit module with four TPUs

As Braam described in his talk at the CHPC 2018 Conference in South Africa, TensorFlow combines many ingredients, for example:

  • many domain specific libraries for machine learning
  • the TensorFlow domain specific data-flow language
  • carefully organized input and output for data flow
  • an optimizing runtime and compiler
  • hardware implementations of TensorFlow operations in Tensor Processing Unit (TPU) chips
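The second and fourth ingredients above hinge on one idea: a program is first recorded as a data-flow graph and only then executed, which gives an optimizing runtime the chance to rewrite it before any computation runs. The following is a minimal pure-Python sketch of that idea, not the TensorFlow API itself; the `Node`, `const`, and `evaluate` names are illustrative inventions.

```python
# Illustrative sketch of a data-flow graph (NOT TensorFlow's API):
# arithmetic builds graph nodes instead of computing values, and a
# separate evaluator runs the graph, memoizing shared subexpressions
# the way an optimizing runtime can reuse or rewrite them.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # operation name: "const", "add", or "mul"
        self.inputs = inputs  # upstream Node objects
        self.value = value    # payload for constant nodes

    def __add__(self, other):
        return Node("add", (self, other))

    def __mul__(self, other):
        return Node("mul", (self, other))

def const(v):
    """Wrap a plain number as a leaf node of the graph."""
    return Node("const", value=v)

def evaluate(node, cache=None):
    """Walk the graph once, computing each shared node a single time."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    if node.op == "const":
        result = node.value
    elif node.op == "add":
        result = evaluate(node.inputs[0], cache) + evaluate(node.inputs[1], cache)
    elif node.op == "mul":
        result = evaluate(node.inputs[0], cache) * evaluate(node.inputs[1], cache)
    else:
        raise ValueError(f"unknown op: {node.op}")
    cache[id(node)] = result
    return result

# Build the graph for (a + b) * (a + b); nothing is computed yet.
a, b = const(3.0), const(4.0)
s = a + b
y = s * s
print(evaluate(y))  # 49.0 — the shared sum s is evaluated only once
```

TensorFlow's runtime applies the same separation at far larger scale: because the whole graph is visible before execution, it can fuse operations, schedule them across devices, or lower them to TPU hardware.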

Peter Braam began his career as an academic in mathematics and computing at Oxford University and Carnegie Mellon University before founding six startup companies, four of them successful, and holding senior executive positions after his startups were acquired. His best-known project is the Lustre file system, which powers the majority of high-end HPC systems; a derivative of that work is the ext4 file system running on virtually all Linux systems. Since 2013, Peter has worked with the Cavendish Laboratory in Cambridge, supporting the SKA telescope project. He is currently a Visiting Professor of Physics at Oxford and a Visiting Scholar at the Flatiron Institute's Center for Computational Astrophysics. He continues to explore new software approaches for data-intensive computing, drawing on computer science, mathematics, and machine learning.

Download the MP3 * Subscribe on iTunes * Subscribe to RSS 


