Video: House Hearing on American Leadership in Quantum Technology

In this video, the House Subcommittee on Research & Technology and the Subcommittee on Energy hold a joint hearing on American Leadership in Quantum Technology. “Quantum technology can completely transform many areas of science and a wide array of technologies, including sensors, lasers, materials science, GPS, and much more. Quantum computers have the potential to solve complex problems that are beyond the scope of today’s most powerful supercomputers. Quantum-enabled data analytics can revolutionize the development of new medicines and materials and assure security for sensitive information.”

GPUs Power Near-global Climate Simulation at 1 km Resolution

A new peer-reviewed paper is reportedly causing a stir in the climatology community. “The best hope for reducing long-standing global climate model biases is through increasing the resolution to the kilometer scale. Here we present results from an ultra-high-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs.”

Accelerating Quantum Chemistry for Drug Discovery

In the pharmaceutical industry, drug discovery is a long and expensive process. This sponsored post from NVIDIA explores how the University of Florida and the University of North Carolina developed the ANAKIN-ME neural network engine to produce computationally fast quantum mechanical simulations with high accuracy at very low cost, speeding drug discovery and exploration.
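For readers unfamiliar with the approach, the sketch below shows the general shape of an atomic neural network potential in PyTorch: per-atom environment descriptors feed a small network whose per-atom outputs are summed into a molecular energy. The descriptor dimension and layer sizes are placeholders, not the published ANI architecture.

```python
# Schematic sketch of an atomic neural network potential in PyTorch.
# Descriptor and layer sizes are illustrative only, not the published ANI model.
import torch
import torch.nn as nn

class AtomicNetwork(nn.Module):
    """Maps per-atom environment descriptors to per-atom energy contributions."""
    def __init__(self, descriptor_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(descriptor_dim, 128), nn.CELU(),
            nn.Linear(128, 64), nn.CELU(),
            nn.Linear(64, 1),
        )

    def forward(self, descriptors):          # shape: (n_atoms, descriptor_dim)
        return self.net(descriptors).sum()   # sum per-atom terms -> molecular energy

# Usage: in practice the descriptors come from symmetry functions of atomic coordinates.
model = AtomicNetwork()
fake_descriptors = torch.randn(20, 64)       # 20 atoms, 64 features each (random stand-in)
energy = model(fake_descriptors)
print(energy.item())
```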

Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan

Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). “Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo’s Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs.”

No speed limit on NVIDIA Volta with rise of AI

In this special guest feature, Brad McCredie from IBM writes that the launch of Volta GPUs from NVIDIA heralds a new era of AI. “We’re excited about the launch of NVIDIA’s Volta GPU accelerators. Together with the NVIDIA NVLINK “information superhighway” at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets.”

Infinite Memory Engine: HPC in the FLASH Era

In this RichReport slidecast, James Coomer from DDN presents an overview of the Infinite Memory Engine (IME). “IME is a scale-out, flash-native, software-defined storage cache that streamlines the data path for application IO. IME interfaces directly to applications and secures IO via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance.”
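As a rough illustration of the burst-buffer idea behind such a flash cache tier (not DDN’s actual IME interface), the Python sketch below absorbs a write on a fast tier and then stages it out to a backing parallel file system; temporary directories stand in for the flash and parallel file system mounts.

```python
# Rough illustration of a flash-cache staging path: writes land on a fast tier
# first, then drain to the backing parallel file system. Temporary directories
# stand in for the flash mount and the Lustre/GPFS mount; this is not IME's API.
import shutil
import tempfile
from pathlib import Path

fast_tier = Path(tempfile.mkdtemp(prefix="flash_cache_"))   # stand-in for an NVMe/flash mount
backing_fs = Path(tempfile.mkdtemp(prefix="parallel_fs_"))  # stand-in for the capacity tier

def write_through_cache(relative_path: str, data: bytes) -> None:
    """Absorb the write on the flash tier, then drain it to the capacity tier."""
    cached = fast_tier / relative_path
    cached.parent.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(data)                   # application sees flash latency
    destination = backing_fs / relative_path
    destination.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(cached, destination)          # drain step (synchronous here for simplicity)

write_through_cache("results/run01.dat", b"checkpoint payload")
print("staged to", backing_fs / "results" / "run01.dat")
```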

Scaling Deep Learning Algorithms on Extreme Scale Architectures

Abhinav Vishnu from PNNL gave this talk at the MVAPICH User Group. “Deep Learning (DL) is ubiquitous. Yet leveraging distributed memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. Our results will include validation on several US supercomputer sites such as Berkeley’s NERSC, the Oak Ridge Leadership Computing Facility, and PNNL Institutional Computing.”
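One common way to bridge that gap is data-parallel training over MPI: each rank computes gradients on its own shard of data and an allreduce averages them so every model replica applies the same update. The mpi4py sketch below shows that pattern with placeholder gradients; it is a generic illustration, not necessarily the specific approach presented in the talk.

```python
# Data-parallel gradient averaging with MPI (run with: mpirun -np 4 python script.py).
# The gradient vector is a random placeholder standing in for a real model's gradients.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_grad = np.random.rand(1000).astype(np.float64)   # this rank's gradients on its data shard
global_grad = np.empty_like(local_grad)

# Sum gradients across all ranks, then divide to obtain the average.
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

# Every rank now holds the same averaged gradient, keeping model replicas in sync.
if rank == 0:
    print(f"averaged gradients across {size} ranks")
```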

Slidecast: How Optalysys Accelerates FFTs with Optical Processing

In this RichReport slidecast, Dr. Nick New from Optalysys describes how the company’s optical processing technology delivers accelerated performance for FFTs and Bioinformatics. “Our prototype is on track to achieve game-changing improvements to process times over current methods whilst providing high levels of accuracy that are associated with the best software processes.”
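As a point of reference for the workload being targeted, the NumPy sketch below computes a 2D cross-correlation via FFTs, the kind of software baseline that optical FFT hardware aims to outrun; the pattern-matching setup and array sizes are illustrative only, not Optalysys’s actual pipeline.

```python
# Software baseline: 2D cross-correlation via the FFT convolution theorem (NumPy).
import numpy as np

def fft_correlate_2d(image, template):
    """Cross-correlate a template against an image using FFTs."""
    F_image = np.fft.fft2(image)
    F_template = np.fft.fft2(template, s=image.shape)   # zero-pad template to image size
    return np.real(np.fft.ifft2(F_image * np.conj(F_template)))

rng = np.random.default_rng(0)
image = rng.random((1024, 1024))
template = rng.random((64, 64))
corr = fft_correlate_2d(image, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)    # offset of the best match
print("best match offset:", peak)
```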

China Upgrading Milky Way 2 Supercomputer to 95 Petaflops

Researchers in China are busy upgrading the Milky Way 2 (Tianhe-2) system to nearly 95 Petaflops (peak). That would nearly double the machine’s current 54.9 Petaflop peak; the system is ranked #2 on the TOP500 with 33.86 Petaflops on the Linpack benchmark. The upgraded system, dubbed Tianhe-2A, should be completed in the coming months.

NASA Perspectives on Deep Learning

Nikunj Oza from NASA Ames gave this talk at the HPC User Forum. “This talk will give a broad overview of work at NASA in the space of data sciences, data mining, machine learning, and related areas. This will include work within the Data Sciences Group at NASA Ames, together with other groups at NASA and university and industry partners. We will delineate our thoughts on the roles of NASA, academia, and industry in advancing machine learning to help with NASA problems.”