Accelerating High-Resolution Weather Models with Deep-Learning Hardware

Sam Hatfield from the University of Oxford gave this talk at the PASC19 conference. “In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at double-, single- and half-precision. In particular, we investigate the potential use of the NVIDIA Tensor Core, a mixed-precision matrix-matrix multiplier mainly developed for use in deep learning, to accelerate the calculation of the Legendre transforms in the Integrated Forecasting System (IFS), one of the leading global weather forecast models.”
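The key idea in the talk is that a Tensor Core multiplies half-precision matrices while accumulating in single precision, trading a little accuracy for a large throughput gain. A minimal NumPy sketch of that numeric behavior (emulated on the CPU; this is an illustration of mixed-precision rounding, not the IFS Legendre transform itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Reference product in double precision.
C_ref = A @ B

# Emulate Tensor Core arithmetic: inputs rounded to half precision,
# multiply-accumulate carried out in single precision.
A16 = A.astype(np.float16)
B16 = B.astype(np.float16)
C_mixed = A16.astype(np.float32) @ B16.astype(np.float32)

# Norm-wise relative error introduced by the reduced-precision inputs.
rel_err = np.linalg.norm(C_mixed - C_ref) / np.linalg.norm(C_ref)
print(f"relative error of emulated mixed-precision product: {rel_err:.2e}")
```

For many spectral-transform workloads this level of error is tolerable, which is what makes deep-learning hardware attractive for weather models.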

Summit Supercomputer Triples Performance Record on new HPL-AI Benchmark

“Using HPL-AI, a new approach to benchmarking AI supercomputers, ORNL’s Summit system has achieved unprecedented performance levels of 445 petaflops, or nearly half an exaflop. That compares with the system’s official performance of 148 petaflops announced in the new TOP500 list of the world’s fastest supercomputers.”
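HPL-AI credits mixed-precision solvers that factorize a linear system in low precision and then recover double-precision accuracy through iterative refinement. A toy sketch of that idea, with single precision standing in for half precision and a fresh low-precision solve standing in for the reused LU factorization a real implementation would keep:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# A well-conditioned (diagonally dominant) test system.
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# Initial solve entirely in low precision.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: double-precision residual, low-precision correction.
for _ in range(5):
    r = b - A @ x
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d

rel_resid = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual after refinement: {rel_resid:.2e}")
```

Because most of the flops land in the cheap low-precision factorization, hardware built for AI workloads posts much higher numbers on HPL-AI than on the classic double-precision HPL run.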

Radio Free HPC Looks at the Coming Wave of 40+ Different AI Chips

In this podcast, the Radio Free HPC team asks, “What are we going to do with 40+ AI chips?” One such chip, from Graphcore, is touted as “the most complex processor” ever at some 20 billion transistors. The VC-backed company out of Bristol, UK is also valued on paper at $1.7 billion, earning it the coveted “unicorn” status as, apparently, the “only western semiconductor unicorn.”

Achieving ExaOps with the CoMet Comparative Genomics Application

Wayne Joubert’s talk at the HPC User Forum described how researchers at the US Department of Energy’s Oak Ridge National Laboratory (ORNL) achieved a record throughput of 1.88 ExaOps on the CoMet algorithm. As the first science application to run at the exascale level, CoMet achieved this remarkable speed analyzing genomic data on the recently launched Summit supercomputer.
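CoMet reaches these throughput levels by casting all-pairs comparisons of genomic vectors as dense matrix-matrix products, the kernel shape GPUs execute near peak. A toy sketch of that reformulation, using hypothetical binary presence/absence marker data and plain co-occurrence counts rather than CoMet’s actual similarity metrics:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_markers = 8, 100
# Hypothetical binary marker vectors, one row per sample.
G = rng.integers(0, 2, size=(n_samples, n_markers)).astype(np.float64)

# All-pairs co-occurrence counts computed as one dense matrix product --
# the GEMM-shaped kernel that accelerators run at high throughput.
cooccur = G @ G.T

# Sanity check: entry (0, 1) matches a direct pairwise computation.
direct = np.sum(G[0] * G[1])
print(cooccur[0, 1] == direct)  # prints True
```

Replacing a loop over sample pairs with a single matrix product is what lets comparison-based genomics scale to a machine like Summit.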