Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan

Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). “Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo’s Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs.”
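
Those component counts work out to a straightforward node configuration, and the headline number can be sanity-checked with simple arithmetic. A quick back-of-the-envelope sketch (the per-GPU figure assumes the full 37 Petaflops is attributed to the GPUs, which the report does not state):

```python
# Back-of-the-envelope check of the reported AIST system configuration.
servers = 1088
cpus = 2176
gpus = 4352
peak_pflops = 37  # reported aggregate figure

print(cpus / servers)   # 2.0 -> two Xeon processors per server
print(gpus / servers)   # 4.0 -> four GPUs per server
# Assumption: attributing the full 37 PF to the GPUs alone gives a rough
# per-GPU contribution (the report does not break the number down this way).
print(peak_pflops * 1000 / gpus)  # ~8.5 teraflops per GPU
```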

No Speed Limit on NVIDIA Volta with Rise of AI

In this special guest feature, Brad McCredie from IBM writes that the launch of Volta GPUs from NVIDIA heralds a new era of AI. “We’re excited about the launch of NVIDIA’s Volta GPU accelerators. Together with the NVIDIA NVLink “information superhighway” at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets.”

Infinite Memory Engine: HPC in the FLASH Era

In this RichReport slidecast, James Coomer from DDN presents an overview of the Infinite Memory Engine (IME). “IME is a scale-out, flash-native, software-defined storage cache that streamlines the data path for application IO. IME interfaces directly to applications and secures IO via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance.”
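
To make the “cache that separates capacity from performance” idea concrete, here is a deliberately simplified toy sketch of the general burst-buffer pattern: application writes land in a fast flash tier and are drained to a capacity file system in the background. This is not DDN’s API; every class and method name below is hypothetical.

```python
# Toy sketch of the generic burst-buffer pattern (hypothetical names, not IME's API):
# application IO lands in a fast flash tier and is drained to the capacity
# file system asynchronously, so performance and capacity scale separately.
class FlashCacheTier:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # slow, high-capacity file system
        self.flash = {}                     # fast tier that absorbs bursty IO

    def write(self, path, data):
        self.flash[path] = data             # application sees flash latency only

    def read(self, path):
        # Serve from flash if present, otherwise fall back to the capacity tier.
        return self.flash.get(path, self.backing_store.get(path))

    def drain(self):
        # Flush cached writes to the capacity tier in the background.
        self.backing_store.update(self.flash)
        self.flash.clear()

pfs = {}                                    # stand-in for a parallel file system
cache = FlashCacheTier(pfs)
cache.write("/scratch/checkpoint.0001", b"...")
cache.drain()
```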

Scaling Deep Learning Algorithms on Extreme Scale Architectures

Abhinav Vishnu from PNNL gave this talk at the MVAPICH User Group. “Deep Learning (DL) is ubiquitous. Yet leveraging distributed memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. Our results will include validation on several US supercomputer sites such as Berkeley’s NERSC, the Oak Ridge Leadership Computing Facility, and PNNL Institutional Computing.”
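
The talk covers PNNL’s specific techniques; as generic background, the core distributed-memory pattern for data-parallel deep learning is gradient averaging across MPI ranks. A minimal mpi4py sketch of that standard pattern (not the approach presented in the talk):

```python
# Minimal data-parallel gradient averaging with MPI -- generic background for
# scaling deep learning on distributed-memory systems, not the PNNL approach.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank computes gradients on its own shard of the training data
# (random values stand in for a real backward pass here).
local_grad = np.random.rand(1024)

# Sum the gradients across all ranks, then average for the global update.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print(f"averaged gradients across {size} ranks")
```

Run with, e.g., `mpirun -np 4 python grad_allreduce.py`.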

Slidecast: How Optalysys Accelerates FFTs with Optical Processing

In this RichReport slidecast, Dr. Nick New from Optalysys describes how the company’s optical processing technology delivers accelerated performance for FFTs and Bioinformatics. “Our prototype is on track to achieve game-changing improvements to process times over current methods whilst providing high levels of accuracy that are associated with the best software processes.”
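
For context, the kind of large 2D FFT workload being targeted looks like this in conventional software; the NumPy calls below are purely a software baseline, not how the optical hardware is programmed:

```python
# Conventional software baseline for a large 2D FFT plus a frequency-domain
# filter -- the class of workload optical processing aims to accelerate.
# Shown for reference only; it is not Optalysys's programming interface.
import numpy as np

image = np.random.rand(4096, 4096)        # large 2D dataset, e.g. an image tile
spectrum = np.fft.fft2(image)             # forward 2D FFT on the CPU
filtered = np.fft.ifft2(spectrum * 0.5)   # trivial filter, then inverse FFT
print(filtered.shape)                     # (4096, 4096)
```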

China Upgrading Milky Way 2 Supercomputer to 95 Petaflops

Researchers in China are busy upgrading the Milky Way 2 (Tianhe-2) system to nearly 95 Petaflops of peak performance, up from its current 54.9 Petaflop peak. The system is currently ranked #2 on the TOP500 with 33.86 Petaflops on the Linpack benchmark, a figure that should also nearly double after the upgrade. The upgraded system, dubbed Tianhe-2A, should be completed in the coming months.

GPUs Accelerate Population Distribution Mapping Around the Globe

With the Earth’s population at 7 billion and growing, understanding population distribution is essential to meeting societal needs for infrastructure, resources, and vital services. This article highlights how NVIDIA GPU-powered AI is accelerating the mapping and analysis of population distribution around the globe. “If there is a disaster anywhere in the world,” said Budhendra Bhaduri of Oak Ridge National Laboratory, “as soon as we have imaging we can create very useful information for responders, empowering recovery in a matter of hours rather than days.”

Machine & Deep Learning: Practical Deployments and Best Practices for the Next Two Years

Arno Kolster from Providentia Worldwide gave this talk at the HPC User Forum in Milwaukee. “Providentia Worldwide is a new venture in technology and solutions consulting which bridges the gap between High Performance Computing and Enterprise Hyperscale computing. We take the best practices from the most demanding compute environments in the world and apply those techniques and design patterns to your business.”

New OrionX Survey: Insights in Artificial Intelligence

In this Radio Free HPC podcast, Dan Olds and Shahin Khan from OrionX describe their new AI survey. “OrionX Research has completed one of the most comprehensive surveys to date of Artificial Intelligence, Machine Learning, and Deep Learning. With over 300 respondents in North America, representing 13 industries, our model indicates a confidence level of 95% and a margin of error of 6%. Covering 144 questions/data points, it provides a comprehensive view of what customers are doing and planning to do with AI/ML/DL.”
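
Those figures are consistent with the textbook margin-of-error calculation for a simple random sample; a quick check, assuming the usual worst-case proportion p = 0.5 (the survey write-up does not state its exact method):

```python
# Sanity check of the quoted ~6% margin of error for roughly 300 respondents
# at a 95% confidence level, assuming worst-case p = 0.5 (an assumption; the
# survey does not spell out its calculation).
import math

n, z, p = 300, 1.96, 0.5
margin = z * math.sqrt(p * (1 - p) / n)
print(f"{margin:.1%}")  # ~5.7%, in line with the reported ~6%
```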

Video: Europe’s HPC Strategy

Leonardo Flores from the European Commission gave this talk at the HPC User Forum in Milwaukee. “High-Performance Computing is a strategic resource for Europe’s future as it allows researchers to study and understand complex phenomena while allowing policy makers to make better decisions and enabling industry to innovate in products and services. The European Commission funds projects to address these needs.”