Microsoft Cognitive Toolkit Updates for Deep Learning Advances

Today Microsoft released an updated version of the Microsoft Cognitive Toolkit, a deep learning system used to speed advances in areas such as speech recognition, image recognition, and search relevance, running on CPUs and Nvidia GPUs. “We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.
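The toolkit is programmable from Python among other front ends. As a rough sketch only (assuming the CNTK 2.x Python package, cntk; the network size, learning rate, and random data below are placeholders, not anything from Microsoft’s announcement), training a small feed-forward classifier looks roughly like this:

# Minimal sketch: a tiny feed-forward classifier trained with the CNTK Python API.
# Assumes the `cntk` 2.x package; network size, learning rate, and data are placeholders.
import numpy as np
import cntk as C

input_dim, num_classes = 4, 2

# Symbolic inputs for features and one-hot labels.
features = C.input_variable(input_dim)
labels = C.input_variable(num_classes)

# One hidden layer; softmax is applied inside the loss below.
model = C.layers.Sequential([
    C.layers.Dense(16, activation=C.relu),
    C.layers.Dense(num_classes)
])(features)

loss = C.cross_entropy_with_softmax(model, labels)
metric = C.classification_error(model, labels)

# Plain SGD learner and trainer.
lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
trainer = C.Trainer(model, (loss, metric), [C.sgd(model.parameters, lr)])

# Train on a few random minibatches (placeholder data).
for _ in range(100):
    x = np.random.rand(32, input_dim).astype(np.float32)
    y = np.eye(num_classes, dtype=np.float32)[np.random.randint(num_classes, size=32)]
    trainer.train_minibatch({features: x, labels: y})

print("last minibatch loss:", trainer.previous_minibatch_loss_average)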

The Intelligent Industrial Revolution

“Over the past six weeks, we took NVIDIA’s developer conference on a world tour. The GPU Technology Conference (GTC) was started in 2009 to foster a new approach to high performance computing using massively parallel processing GPUs. GTC has become the epicenter of GPU deep learning — the new computing model that sparked the big bang of modern AI. It’s no secret that AI is spreading like wildfire. The number of GPU deep learning developers has leapt 25 times in just two years.”

Video: HPC Opportunities in Deep Learning

“This talk will provide empirical evidence from our Deep Speech work that application level performance (e.g. recognition accuracy) scales with data and compute, transforming some hard AI problems into problems of computational scale. It will describe the performance characteristics of Baidu’s deep learning workloads in detail, focusing on the recurrent neural networks used in Deep Speech as a case study. It will cover challenges to further improving performance, describe techniques that have allowed us to sustain 250 TFLOP/s when training a single model on a cluster of 128 GPUs, and discuss straightforward improvements that are likely to deliver even better performance.”
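As a back-of-envelope reading of the quoted figures (a sketch only; the abstract does not state per-GPU peak throughput or numeric precision, so no efficiency figure is implied):

# Per-GPU sustained throughput implied by the figures quoted above.
sustained_tflops = 250.0   # cluster-wide sustained TFLOP/s (from the abstract)
num_gpus = 128             # GPUs training a single model (from the abstract)

per_gpu = sustained_tflops / num_gpus
print(f"~{per_gpu:.2f} TFLOP/s sustained per GPU")  # roughly 1.95 TFLOP/s per GPU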

insideHPC Research Report – GPU Accelerators

In this research report, we reveal recent research showing that customers are feeling the need for speed; in other words, they’re looking for more processing cores. Not surprisingly, we found that they’re investing more money in accelerators like GPUs and, moreover, are seeing solid positive results from using them. In the balance of this report, we take a look at the newest GPU technology from NVIDIA and how it performs against traditional servers and earlier GPU products. Download this guide to learn more.

HPC Advisory Council China Conference Returns to Xi’an Oct. 26

The HPC Advisory Council has posted their agenda for their upcoming China Conference. The event takes place Oct. 26 in Xi’an, China. “We invite you to join us on Wednesday, October 26th, in Xi’an for our annual China Conference. This year’s agenda will focus on Deep learning, Artificial Intelligence, HPC productivity, advanced topics and futures. Join fellow technologists, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High Performance Computing.”

Radio Free HPC Looks into the New OpenCAPI Consortium

In this podcast, the Radio Free HPC team looks at the new OpenCAPI interconnect standard. “Released this week by the newly formed OpenCAPI Consortium, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design, which puts the compute power closer to the data, removes inefficiencies in traditional system architectures to help eliminate system bottlenecks and can significantly improve server performance.”

New OpenCAPI Consortium to Boost Server Performance 10x

“IBM has decided to double down on our commitment to open standards and enablement of industry innovation by opening up access to our CAPI technology to the entire industry. With the support of our OpenCAPI co-founders, we have created a new OpenCAPI specification that tremendously improves performance over our prior specification and IBM will be among the first to implement it with our POWER9 products expected in 2017.”

NYU Advances Robotics with Nvidia DGX-1 Deep Learning Supercomputer

In this video, NYU researchers describe their plans to advance deep learning with their new Nvidia DGX-1 AI supercomputer. “The DGX-1 is going to be used in just about every research project we have here,” said Yann LeCun, founding director of the NYU Center for Data Science and a pioneer in the field of AI. “The students here can’t wait to get their hands on it.”

NVLink Speeds Deep Learning on New OpenPOWER Servers

Over at the IBM System Blog, Sumit Gupta writes that the company’s new IBM Power System 822LC with Nvidia Tesla P100 GPUs is already demonstrating impressive performance on Deep Learning training applications. “A single S822LC for HPC with four NVIDIA Tesla P100 GPUs is 2.2 times faster reaching 50 percent accuracy in AlexNet than a server with four NVIDIA Tesla M40 GPUs!”

AI & Robotics Front and Center at GTC Japan

Robotics and Deep Learning applications were front and center at GTC Japan this week, where 2,600 attendees lined up to hear the latest on GPU technologies. “The age of AI is here,” said Jen-Hsun Huang, founder and CEO of NVIDIA. “GPU deep learning ignited this new wave of computing where software learns and machines reason. […]