Andrew Ng Leaving Baidu for Next Chapter in AI

“As the founding lead of the Google Brain project, and more recently through my role at Baidu, I have played a role in the transformation of two leading technology companies into ‘AI companies.’ But AI’s potential is far bigger than its impact on technology companies. I will continue my work to shepherd in this important societal change. In addition to transforming large companies to use AI, there are also rich opportunities for entrepreneurship as well as further AI research.”

Andrew Ng on why Artificial Intelligence is the New Electricity

“Electricity transformed industries: agriculture, transportation, communication, manufacturing. I think we are now in that phase where AI technology has advanced to the point where we see a clear path for it to transform multiple industries.” Specifically, Ng sees AI being particularly influential in entertainment, retail, and logistics.

Slidecast: ARM Steps Up to Machine Learning

In this slidecast, Jem Davies (VP Engineering and ARM Fellow) gives a brief introduction to Machine Learning and explains how it is used in devices such as smartphones, autos, and drones. “I do think that machine learning altogether is probably going to be one of the biggest shifts in computing that we’ll see in quite a few years. I’m reluctant to put a number on it like — the biggest thing in 25 years or whatever,” said Jem Davies in a recent investor call. “But this is going to be big. It is going to affect all of us. It affects quite a lot of ARM, in fact.”

Fathom Neural Compute Stick Enables Mobile Devices to React Cognitively

Intel-owned Movidius has introduced a fascinating new device called the Fathom Neural Compute Stick, a modular deep learning accelerator in the form of a standard USB stick. “The Fathom Neural Compute Stick is the first of its kind: A powerful, yet surprisingly efficient Deep Learning processor embedded into a standard USB stick. The Fathom Neural Compute Stick acts as a discrete neural compute accelerator, allowing devices with a USB port to run neural networks at high speed, while sipping under a single Watt of power.”
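
For context, host-side use of such a stick typically follows a simple pattern: open the device over USB, load a pre-compiled network graph onto it, stream input tensors in, and read results back. Below is a minimal Python sketch of that flow, assuming the Python API that shipped with the later Movidius Neural Compute SDK (the Fathom-era toolchain may differ), with a random stand-in image as input.

```python
# Minimal sketch: offloading one image classification to a USB neural compute
# stick. Assumes the "mvnc" Python API from the later Movidius Neural Compute
# SDK; the Fathom-era tooling may use different names.
import numpy as np
from mvnc import mvncapi as mvnc

# Find and open the first attached stick.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a pre-compiled network graph onto the stick.
with open('graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# Push one preprocessed image (fp16) and read back the result.
image = np.random.rand(224, 224, 3).astype(np.float16)  # stand-in input
graph.LoadTensor(image, 'user object')
output, _ = graph.GetResult()
print('top class:', int(np.argmax(output)))

# Release resources on the stick.
graph.DeallocateGraph()
device.CloseDevice()
```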

Artificial Intelligence: It’s No Longer Science Fiction

“Computational science has come a long way with machine learning (ML) and deep learning (DL) in just the last year. Leading centers of high-performance computing are making great strides in developing and running ML/DL workloads on their systems. Users and algorithm scientists are continuing to optimize their codes and techniques that run their algorithms, while system architects work out the challenges they still face on various system architectures. At SC16, I had the honor of hosting three of HPC’s thought leaders in a panel to get their ideas about the state of Artificial Intelligence (AI), today’s challenges with the technology, and where it’s going.”

Adrian Cockcroft Presents: Shrinking Microservices to Functions

In this fascinating talk, Cockcroft describes how advances in hardware and networking have reshaped how services like Machine Learning are being developed rapidly in the cloud with AWS Lambda. “We’ve seen the same service oriented architecture principles track advancements in technology from the coarse grain services of SOA a decade ago, through microservices that are usually scoped to a more fine grain single area of responsibility, and now functions as a service, serverless architectures where each function is a separately deployed and invoked unit.”
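
As a concrete illustration of “functions as a service,” here is a minimal AWS Lambda handler in Python: a single function that is deployed and invoked as its own unit, with no server to manage. The event shape (an "image_url" field) is a hypothetical payload for illustration.

```python
import json

def handler(event, context):
    """A separately deployed, separately invoked unit of work.

    Lambda calls this function once per invocation; 'event' carries the
    request payload. The 'image_url' field is a hypothetical input for an
    image-classification function.
    """
    image_url = event.get('image_url', '')
    # In a real function, model inference or a call to an ML service would
    # happen here; this sketch just echoes a placeholder label.
    result = {'image_url': image_url, 'label': 'placeholder'}
    return {'statusCode': 200, 'body': json.dumps(result)}
```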

Deep Learning & HPC: New Challenges for Large Scale Computing

“In recent years, major breakthroughs have been achieved in different fields using deep learning. From image segmentation and speech recognition to self-driving cars, deep learning is everywhere. The performance of image classification, segmentation, and localization has reached levels not seen before, thanks to GPUs and large-scale GPU-based deployments, making deep learning a first-class HPC workload.”
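
To make the “first-class HPC workload” point concrete, large-scale training is usually run data-parallel: one process per GPU, each computing gradients on its own slice of the batch, with gradients averaged over a fast interconnect. The sketch below uses PyTorch’s DistributedDataParallel with a stand-in model and random data; it illustrates the pattern rather than any specific deployment mentioned above.

```python
# Minimal data-parallel training sketch: one process per GPU, gradients
# averaged across ranks via NCCL. Launch with e.g.:
#   torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend='nccl')            # one rank per GPU
    local_rank = int(os.environ['LOCAL_RANK'])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        x = torch.randn(64, 1024, device=local_rank)     # stand-in batch
        y = torch.randint(0, 10, (64,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()   # gradient all-reduce happens during backward
        opt.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    main()
```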

Defining AI, Machine Learning, and Deep Learning

“In this guide, we take a high-level view of AI and deep learning in terms of how they’re being used and what technological advances have made them possible. We also explain the difference between AI, machine learning, and deep learning, and examine the intersection of AI and HPC. We then present the results of a recent insideBIGDATA survey to see how well these new technologies are being received. Finally, we take a look at a number of high-profile use case examples showing the effective use of AI in a variety of problem domains.”

Agenda Posted for Next Week’s HPC Advisory Council Stanford Conference

“Over two days we’ll delve into a wide range of interests and best practices – in applications, tools and techniques – and share new insights on the trends, technologies and collaborative partnerships that foster this robust ecosystem. Designed to be highly interactive, the open forum will feature industry notables in keynotes, technical sessions, workshops and tutorials. These highly regarded subject matter experts (SMEs) will share their works and wisdom covering everything from established HPC disciplines to emerging usage models, from old-school architectures and breakthrough applications to pioneering research and provocative results. Plus a healthy smattering of conversation and controversy on endeavors in Exascale, Big Data, Artificial Intelligence, Machine Learning and much, much more!”

Intel FPGAs Break Record for Deep Learning Facial Recognition

Today Intel announced record results on a new benchmark in deep learning and convolutional neural networks (CNNs). ZTE’s engineers used Intel’s midrange Arria 10 FPGA for a cloud inferencing application using a CNN algorithm. “ZTE has achieved a new record – beyond a thousand images per second in facial recognition – with what is known as ‘theoretical high accuracy’ achieved for their custom topology. Intel’s Arria 10 FPGA accelerated the raw design performance more than 10 times while maintaining the accuracy.”
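
For readers less familiar with what an FPGA accelerates in this setting: the dominant cost in CNN inference is the convolution layer, a sliding dot product between learned filters and the input feature map, usually followed by a nonlinearity. The plain NumPy sketch below shows that core operation in its naive form; it bears no relation to ZTE’s custom topology, and the shapes are illustrative only.

```python
import numpy as np

def conv2d_relu(image, filters):
    """Naive 2D convolution + ReLU: the core operation a CNN inference
    accelerator (GPU or FPGA) spends most of its cycles on.

    image:   (H, W, C_in) input feature map
    filters: (K, K, C_in, C_out) learned weights
    returns: (H-K+1, W-K+1, C_out) output feature map
    """
    H, W, C_in = image.shape
    K, _, _, C_out = filters.shape
    out = np.zeros((H - K + 1, W - K + 1, C_out))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + K, x:x + K, :]   # local receptive field
            for c in range(C_out):
                out[y, x, c] = np.sum(patch * filters[:, :, :, c])
    return np.maximum(out, 0.0)                  # ReLU nonlinearity

# Example: 32x32 RGB input, eight 3x3 filters.
feature_map = conv2d_relu(np.random.rand(32, 32, 3), np.random.rand(3, 3, 3, 8))
print(feature_map.shape)  # (30, 30, 8)
```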