Podcast: Bill Dally from NVIDIA on What’s Next for AI

“NVIDIA researchers are gearing up to present 19 accepted papers and posters, seven of them during speaking sessions, at the annual Computer Vision and Pattern Recognition conference next week in Salt Lake City, Utah. Joining us to discuss some of what’s being presented at CVPR, and to share his perspective on the world of deep learning and AI in general is one of the pillars of the computer science world, Bill Dally, chief scientist at NVIDIA.”

Enhancing Diagnostic Quality and Productivity with AI

This report delves into many advances in clinical imaging being introduced through AI. Activity today is focused mostly on working within the computational environment that already exists in the radiologist's laboratory, and the report examines how advanced medical instruments and software solutions that incorporate AI can augment a radiologist's work. Download the new white paper from NVIDIA that explores increasing productivity with AI and how tools like deep learning can enhance offerings and cut costs.

Xilinx Acquires DeePhi Tech, a Machine Learning Startup based in China

Today FPGA maker Xilinx announced that it has acquired DeePhi Technology, a Beijing-based privately held start-up with industry-leading capabilities in machine learning, specializing in deep compression, pruning, and system-level optimization for neural networks. “Xilinx will continue to invest in DeePhi Tech to advance our shared goal of deploying accelerated machine learning applications in the cloud as well as at the edge.”

Podcast: Deep Learning for Scientific Data Analysis

In this NERSC News Podcast, Debbie Bard from NERSC describes how Deep Learning can help scientists accelerate their research. “Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.”

Extreme Scale Deep Learning at NERSC

Thorsten Kurth from LBNL gave this talk at the PASC18 conference. “We present various studies on very large scale distributed deep learning on HPC systems including the ~10k node Intel Xeon-Phi-based Cori system at NERSC. We explore CNN classification architectures and generative adversarial networks for HEP problems using large images corresponding to full LHC detectors and high-resolution cosmology convergence maps.”

DDN Steps Up to HPC & AI Workloads at ISC 2018

In this video from ISC 2018, James Coomer from DDN describes the company’s latest high performance storage technologies for AI and HPC workloads. “Attendees at ISC 2018 learned how organizations around the world are leveraging DDN’s people, technology, performance and innovation to achieve their greatest visions and make revolutionary insights and discoveries! Designed, optimized and right-sized for Commercial HPC, Higher Education and Exascale Computing, our full range of DDN products and solutions are changing the landscape of HPC and delivering the most value with the greatest operational efficiency.”

NVIDIA Offers Framework to Solve AI System Challenges

At the recent NVIDIA GPU Technology Conference (GTC) 2018, NVIDIA President and CEO Jensen Huang devoted part of his presentation to a new framework designed to contextualize the key challenges of using AI systems and delivering deep learning-based solutions. A new white paper sponsored by NVIDIA outlines these requirements, coined PLASTER.

PLASTER: A Framework for Deep Learning Performance

Both hardware and software advances in deep learning (DL), a subset of machine learning, appear to be catalysts for the early stages of a phenomenal AI growth trend. Download the new white paper from NVIDIA that addresses the challenges described by PLASTER (Programmability, Latency, Accuracy, Size of model, Throughput, Energy efficiency, Rate of learning), which matter in any deep learning solution and are especially useful for developing and delivering the inference engines underpinning AI-based services.

Deep Learning Open Source Framework Optimized on Apache Spark*

Intel recently released BigDL, an open source, highly optimized, distributed deep learning framework for Apache Spark*. It turns a Hadoop/Spark cluster into a unified platform for data storage, data processing and mining, feature engineering, traditional machine learning, and deep learning workloads, resulting in better economy of scale, higher resource utilization, ease of use and development, and better TCO.
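The core idea behind frameworks like BigDL is data parallelism: each Spark partition computes gradients on its own shard of the data, and those gradients are averaged into one synchronized weight update. A minimal sketch of that pattern in plain NumPy (no Spark, and deliberately not BigDL's actual API; the shard/worker names here are illustrative only):

```python
import numpy as np

# Data-parallel training sketch: 4 simulated "workers" each hold a
# shard of the data, compute a local gradient, and the driver
# averages the gradients before a single shared weight update.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

shards = np.array_split(np.arange(len(X)), 4)  # 4 equal-sized shards
w = np.zeros(3)
lr = 0.1

for epoch in range(100):
    # Each worker computes a least-squares gradient on its shard...
    grads = [X[idx].T @ (X[idx] @ w - y[idx]) / len(idx) for idx in shards]
    # ...and the driver averages them into one synchronized update.
    w -= lr * np.mean(grads, axis=0)
```

With equal-sized shards, the average of the per-shard gradients equals the full-batch gradient, which is why this scales out without changing the mathematics of the update.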

NVIDIA Releases Code for Accelerated Machine Learning

Today NVIDIA made a number of announcements centered around Machine Learning software at the Computer Vision and Pattern Recognition Conference in Salt Lake City. “NVIDIA is kicking off the conference by demonstrating an early release of Apex, an open-source PyTorch extension that helps users maximize deep learning training performance on NVIDIA Volta GPUs. Inspired by state of the art mixed precision training in translational networks, sentiment analysis, and image classification, NVIDIA PyTorch developers have created tools bringing these methods to all levels of PyTorch users.”
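Mixed precision training, the technique Apex packages for PyTorch users, typically keeps an FP32 "master" copy of the weights, runs the compute in FP16, and scales the loss to keep small gradients from underflowing in FP16. A minimal NumPy sketch of that recipe (this is not Apex's API; all names here are illustrative):

```python
import numpy as np

# Mixed-precision logistic regression sketch:
#   - FP32 master weights, FP16 casts for the forward pass
#   - loss scaling before FP16 accumulation, unscaling before the update
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4)).astype(np.float16)      # FP16 activations
y = (X.astype(np.float32) @ np.array([1., -2., 0.5, 3.])) > 0

w_master = np.zeros(4, dtype=np.float32)   # FP32 master copy of weights
lr, loss_scale = 0.1, 1024.0

for step in range(200):
    w16 = w_master.astype(np.float16)       # cast weights down for compute
    logits = (X @ w16).astype(np.float32)   # FP16 matmul, FP32 afterwards
    probs = 1 / (1 + np.exp(-logits))
    grad = X.astype(np.float32).T @ (probs - y) / len(y)
    # Scale up before the FP16 round-trip so tiny gradients survive,
    # then unscale before applying the FP32 weight update.
    scaled = (grad * loss_scale).astype(np.float16)
    w_master -= lr * (scaled.astype(np.float32) / loss_scale)

acc = np.mean((1 / (1 + np.exp(-(X.astype(np.float32) @ w_master))) > 0.5) == y)
```

The FP32 master copy is the key design choice: weight updates are often too small to be representable in FP16, so accumulating them in FP32 preserves convergence while the bulk of the compute stays in the faster half-precision format.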