Intel Optimized Libraries Accelerate Deep Learning Applications on Intel Platforms

Whatever the platform, getting the best possible performance out of an application is always a challenge, and this is especially true when developing AI and machine learning applications on CPUs. This sponsored post from Intel explores how to effectively train and execute machine learning and deep learning projects on CPUs.
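The details are in the sponsored post itself, but as a rough illustration of the kind of CPU-side tuning involved, here is a minimal sketch using a PyTorch build that dispatches to Intel's oneDNN (formerly MKL-DNN) library on CPU. The thread counts and affinity settings below are illustrative assumptions, not Intel's recommendations:

```python
import os

# Pin OpenMP threads to physical cores; these values are illustrative
# and should be matched to the actual core count of the machine.
os.environ["OMP_NUM_THREADS"] = "16"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

import torch
import torchvision.models as models

# On CPU, PyTorch routes conv/matmul kernels to oneDNN when available;
# set intra-op parallelism to match the core count chosen above.
torch.set_num_threads(16)

model = models.resnet18(weights=None).eval()
x = torch.randn(32, 3, 224, 224)

with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([32, 1000])
```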

AI Chipmaker Hailo Releases Power-efficient Deep Learning Processor

Today AI chipmaker Hailo announced the “world’s top performing deep learning processor.” The company is now sampling its breakthrough Hailo-8 chip with select partners across multiple industries, with a focus on automotive. The chip is built with an innovative architecture that enables edge devices to run sophisticated deep learning applications that could previously run only in the cloud.

Intel Xeon Scalable Processors Set Deep Learning Performance Record on ResNet-50

Today Intel announced a deep learning performance record on image classification workloads. “Today, we have achieved leadership performance of 7878 images per second on ResNet-50 with our latest generation of Intel Xeon Scalable processors, outperforming 7844 images per second on NVIDIA Tesla V100, the best GPU performance as published by NVIDIA on its website including T4.”
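Intel’s figure comes from its own heavily optimized software stack. Purely for illustration, here is a minimal, unoptimized sketch of how images-per-second throughput on ResNet-50 is typically measured; the batch size and iteration counts are arbitrary assumptions:

```python
import time
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()
batch = torch.randn(64, 3, 224, 224)  # batch size is an arbitrary choice

with torch.no_grad():
    # Warm-up iterations so one-time setup costs don't skew the timing.
    for _ in range(5):
        model(batch)
    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"throughput: {iters * batch.shape[0] / elapsed:.1f} images/sec")
```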

IBM Research Applies Deep Learning for Detecting Glaucoma

Over at the IBM Blog, Rahil Garnavi writes that IBM researchers have developed new techniques in deep learning that could help unlock earlier glaucoma detection. “Earlier detection of glaucoma is critical to slowing its progression in individuals and its rise across our global population. Using deep learning to uncover valuable information in non-invasive, standard retina imaging could lay the groundwork for new and much more rapid glaucoma testing.”

Pioneers in Deep Learning to Receive ACM Turing Award

Today ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. “The ACM A.M. Turing Award, often referred to as the ‘Nobel Prize of Computing,’ carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing.”

Architecting the Right System for Your AI Application—without the Vendor Fluff

Brett Newman from Microway gave this talk at the Stanford HPC Conference. “Figuring out how to map your dataset or algorithm to the optimal hardware design is one of the hardest tasks in HPC. We’ll review what helps steer the selection of one system architecture from another for AI applications. Plus the right questions to ask of your collaborators—and a hardware vendor. Honest technical advice, no fluff.”

Podcast: Bill Dally from NVIDIA on What’s Next for AI

“NVIDIA researchers are gearing up to present 19 accepted papers and posters, seven of them during speaking sessions, at the annual Computer Vision and Pattern Recognition conference next week in Salt Lake City, Utah. Joining us to discuss some of what’s being presented at CVPR, and to share his perspective on the world of deep learning and AI in general is one of the pillars of the computer science world, Bill Dally, chief scientist at NVIDIA.”

Enhancing Diagnostic Quality and Productivity with AI

This report delves into the many advances in clinical imaging being introduced through AI. Most activity today focuses on the computational environment that already exists in the radiologist’s laboratory, and the report examines how advanced medical instruments and software solutions that incorporate AI can augment a radiologist’s work. Download the new white paper from NVIDIA that explores increasing productivity with AI and how tools like deep learning can enhance offerings and reduce costs.

Xilinx Acquires DeePhi Tech, a Machine Learning Startup Based in China

Today FPGA maker Xilinx announced that it has acquired DeePhi Technology, a Beijing-based privately held start-up with industry-leading capabilities in machine learning, specializing in deep compression, pruning, and system-level optimization for neural networks. “Xilinx will continue to invest in DeePhi Tech to advance our shared goal of deploying accelerated machine learning applications in the cloud as well as at the edge.”
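DeePhi’s compression pipeline is proprietary, but magnitude-based weight pruning, one of the techniques named above, can be sketched in a few lines. This is an illustrative toy example, not DeePhi’s method, and the 50% sparsity target is an arbitrary assumption:

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in a linear layer."""
    w = layer.weight.data
    k = int(w.numel() * sparsity)
    if k == 0:
        return
    # Threshold = k-th smallest absolute weight; everything at or below
    # it is zeroed, leaving roughly (1 - sparsity) of the weights intact.
    threshold = w.abs().flatten().kthvalue(k).values
    mask = (w.abs() > threshold).to(w.dtype)
    layer.weight.data = w * mask

layer = nn.Linear(256, 128)
magnitude_prune(layer, sparsity=0.5)
print(f"sparsity: {(layer.weight == 0).float().mean():.2%}")
```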

Podcast: Deep Learning for Scientific Data Analysis

In this NERSC News Podcast, Debbie Bard from NERSC describes how deep learning can help scientists accelerate their research. “Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.”