Samsung, IBM, Tencent Lead AI Patent Race, Europe Lags

Three companies – Samsung, IBM and Tencent – have dominated the global AI patent race over the past 10 years, while fierce competition between the U.S. and China overshadows other countries and regions, including the EU. These are the key findings of OxFirst, a specialist in IP law and economics (and a spin-out of Oxford University), […]

Intel, NSF Name Winners of Wireless Machine Learning Research Funding

Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects on ultra-dense wireless systems that meet the throughput, latency and reliability requirements of future applications – including distributed machine learning computations over wireless edge networks. Here are the […]

ARM-based Fugaku Supercomputer on Summit of New Top500 – Surpasses Exaflops on AI Benchmark

The new no. 1 system on the updated TOP500 list of the world’s most powerful supercomputers, released this morning, is Fugaku, a machine built at the RIKEN Center for Computational Science in Kobe, Japan. The new top system turned in a High Performance LINPACK (HPL) result of 415.5 petaflops (nearly half an exaflop), outperforming Summit, the former no. 1 system housed at the U.S. Dept. of Energy’s Oak Ridge National Laboratory, by a factor of 2.8. Fugaku, powered by Fujitsu’s 48-core A64FX SoC, is the first ARM-based system to take the TOP500 top spot.

NetApp Deploys Iguazio’s Data Science Platform for Optimized Storage Management

NetApp said the service, previously built on Hadoop, also needed a modernized infrastructure “to reduce the complexities of deploying new AI services and the costs of running large-scale analytics. In addition, the shift was needed to enable real-time predictive AI, and to abstract deployment, allowing the technology to run on multi-cloud or on premises seamlessly.”

The true cost of AI innovation

“As the world’s attention has shifted to climate change, the field of AI is beginning to take note of its carbon cost. Research done at the Allen Institute for AI by Roy Schwartz et al. raises the question of whether efficiency, alongside accuracy, should become an important factor in AI research, and suggests that AI scientists ought to deliberate if the massive computational power needed for expensive processing of models, colossal amounts of training data, or huge numbers of experiments is justified by the degree of improvement in accuracy.”

Fast Track your AI Workflows

In this special guest feature, our friends over at Inspur write that new, highly compute-intensive workloads often require accelerators. Accelerators speed up computation and allow AI and ML algorithms to be used in real time. Inspur is a leading supplier of solutions for HPC and AI/ML workloads.

The Role of Middleware in Optimizing Vector Processing

This whitepaper delves into the world of unstructured data and describes some of the technologies, especially vector processors and their optimization software, that play key roles in solving the problems that arise as a result of the accelerating volume of data generated globally.

Podcast: Advancing Deep Learning with Custom-Built Accelerators

In this Chip Chat podcast, Carey Kloss from Intel outlines the architecture and potential of the Intel Nervana NNP-T. He gets into major issues like memory and how the architecture was designed to avoid problems like becoming memory-locked, how the accelerator supports existing software frameworks like PaddlePaddle and TensorFlow, and what the NNP-T means for customers who want to keep an eye on power usage and lower TCO.

One Stop Systems does AI on the Fly at SC19

In this video from SC19, Jaan Mannik from One Stop Systems describes how the company delivers AI on the Fly. “With AI on the Fly, OSS puts computing and storage resources for the entire AI workflow, not in the datacenter, but on the edge near the sources of data. Applications are emerging for this new AI paradigm in diverse areas including autonomous vehicles, predictive personalized medicine, battlefield command and control, and industrial automation.”

The Eco-System of AI and How to Use It

Glyn Bowden from HPE gave this talk at the UK HPC Conference. “This presentation walks through HPE’s current view on AI applications, where it is driving outcomes and innovation, and where the challenges lie. We look at the eco-system that sits around an AI project and at ways this can impact the success of the endeavor.”