Enter Your Machine Learning Code in the Cognitive Cup

“OpenPOWER is all about creating a broad ecosystem with opportunities to accelerate your workloads. For the Cognitive Cup, we provide two types of accelerators: GPUs and FPGAs. GPUs are used by the Deep Learning framework to train your neural network. When you want to use the neural network during the ‘classification’ phase, you have a choice of Power CPUs, GPUs and FPGAs.”
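
As a rough sketch of that training/classification split (using PyTorch only as a stand-in framework, not the contest's actual toolchain), the example below trains a small network on a GPU when one is available and then runs the classification phase on the CPU; targeting an FPGA would instead go through a vendor-specific deployment flow.

```python
# Illustrative sketch, not the Cognitive Cup toolchain: train on a GPU,
# then run the "classification" (inference) phase on the CPU.
import torch
import torch.nn as nn

train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training phase on the accelerator (synthetic data for illustration).
x = torch.randn(256, 16, device=train_device)
y = torch.randint(0, 2, (256,), device=train_device)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Classification phase: the trained network can be moved to a different
# device, e.g. the CPU (an FPGA would need a separate deployment flow).
model_cpu = model.to("cpu").eval()
with torch.no_grad():
    prediction = model_cpu(torch.randn(1, 16)).argmax(dim=1)
print(prediction.item())
```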

China Develops Darwin Neuromorphic Chip

Researchers from Zhejiang University and Hangzhou Dianzi University in China have developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks and fabricated in standard CMOS technology. “Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and building brain-computer interface systems by interfacing with animal or human brains.”
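
The Darwin chip's internal design isn't detailed here, but the spike-based processing it relies on can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, a basic building block of many spiking neural network models. The NumPy sketch below uses illustrative parameter values that are not taken from the Darwin chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: integrates input current,
# emits a spike when the membrane potential crosses a threshold, then resets.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate a single LIF neuron and return its spike train."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # Leaky integration of the input current.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_threshold:      # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_reset           # reset the membrane potential
    return spikes

# Drive the neuron with a noisy constant current and count output spikes.
rng = np.random.default_rng(0)
current = 1.5 + 0.5 * rng.standard_normal(1000)
print(int(simulate_lif(current).sum()), "spikes in 1000 time steps")
```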

Interview: Baidu Speeds Deep Learning with GPU Clusters

“Deep neural networks are increasingly important for powering AI-based applications like speech recognition. Baidu’s research shows that adding GPUs to the data center makes deploying big deep neural networks practical at scale. Deep learning based technologies benefit from batching user requests in the data center, which requires a different software architecture than traditional web applications.”
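
The batching idea can be sketched independently of Baidu's actual serving stack: incoming user requests wait briefly in a queue so the server can run them through the network as a single batched forward pass. The queue, the fake_model stand-in, and the timing values below are all illustrative assumptions.

```python
# Illustrative request batching for data-center inference: individual
# requests are grouped and processed together, amortizing the cost of
# each forward pass. Not Baidu's actual serving architecture.
import queue
import threading
from concurrent.futures import Future
import numpy as np

request_queue = queue.Queue()

def fake_model(batch):
    """Stand-in for a neural-network forward pass over a whole batch."""
    return batch.sum(axis=1)

def batching_worker(max_batch=8, timeout=0.01):
    """Collect individual requests into batches and run them together."""
    while True:
        items = [request_queue.get()]          # wait for the first request
        while len(items) < max_batch:
            try:                               # gather more until a short deadline
                items.append(request_queue.get(timeout=timeout))
            except queue.Empty:
                break
        batch = np.stack([x for x, _ in items])
        results = fake_model(batch)            # one batched forward pass
        for (_, fut), r in zip(items, results):
            fut.set_result(float(r))           # hand each caller its result

threading.Thread(target=batching_worker, daemon=True).start()

# Simulate many concurrent user requests, each waiting on its own Future.
futures = []
for _ in range(32):
    fut = Future()
    request_queue.put((np.random.rand(4), fut))
    futures.append(fut)
print([round(f.result(), 2) for f in futures[:4]])
```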

Nvidia Speeds Up Deep Learning Software

Today Nvidia updated its GPU-accelerated deep learning software to boost deep learning training performance. With new releases of DIGITS and cuDNN, the software delivers significant performance gains to help data scientists create more accurate neural networks through faster model training and more sophisticated model design.

Podcast: Geoffrey Hinton on the Rise of Deep Learning

“In Deep Learning what we do is try to minimize the amount of hand engineering and get the neural nets to learn, more or less, everything. Instead of programming computers to do particular tasks, you program the computer to know how to learn. And then you can give it any old task, and the more data and the more computation you provide, the better it will get.”