Ingolf Wittmann from IBM presented this talk at the Switzerland HPC Conference. “This presentation will point out, based on real examples, how HPC environments can benefit from such solutions and technologies to drive cognitive solutions and machine/deep learning, where we can ask ourselves, ‘What will be possible in the near future – can future computers be smarter than humans?’”
Today IBM announced that the first annual OpenPOWER Foundation Developer Congress will take place May 22-25 in San Francisco. With a focus on Machine Learning, the conference aims to continue fostering collaboration within the foundation to satisfy the performance demands of today’s computing market.
“The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster (#28 on the November 2016 Top500 list), to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters.”
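As a minimal illustration of the “multiple layers of increasing abstraction” idea the abstract describes, here is a toy two-layer forward pass in NumPy. The layer sizes, weights, and activation function are arbitrary choices for demonstration, not anything from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity; without it, stacked layers
    # would collapse into a single linear transform.
    return np.maximum(0.0, x)

# A toy 2-layer network: each layer re-represents its input
# in a new feature space of its own.
W1 = rng.standard_normal((4, 8))   # raw features -> first abstraction
W2 = rng.standard_normal((8, 3))   # first abstraction -> output space

x = rng.standard_normal(4)         # one input example
h = relu(x @ W1)                   # hidden representation (layer 1)
y = h @ W2                         # higher-level output (layer 2)

print(h.shape, y.shape)            # (8,) (3,)
```

In a real deep network there are many such layers, the weights are learned from data rather than drawn at random, and the arithmetic is what maps so naturally onto GPUs: each layer is essentially a large matrix multiply.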
“Nimbix has tremendous experience in GPU cloud computing, going all the way back to NVIDIA’s Fermi architecture,” said Steve Hebert, CEO of Nimbix. “We are looking forward to accelerating deep learning and analytics applications for customers seeking the latest generation GPU technology available in a public cloud.”
Costas Bekas from IBM Research Zurich presented this talk at the Switzerland HPC Conference. “IBM Research builds applications that enable humans to collaborate with powerful AI technologies to discover, analyze and tackle the world’s greatest challenges. Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side-by side with humans, accelerating our ability to create, learn, make decisions and think.”
“As the founding lead of the Google Brain project, and more recently through my role at Baidu, I have played a role in the transformation of two leading technology companies into ‘AI companies.’ But AI’s potential is far bigger than its impact on technology companies. I will continue my work to shepherd in this important societal change. In addition to transforming large companies to use AI, there are also rich opportunities for entrepreneurship as well as further AI research.”
“Electricity transformed industries: agriculture, transportation, communication, manufacturing. I think we are now in that phase where AI technology has advanced to the point where we see a clear path for it to transform multiple industries.” Specifically, Ng sees AI being particularly influential in entertainment, retail, and logistics.
In this slidecast, Jem Davies (VP Engineering and ARM Fellow) gives a brief introduction to Machine Learning and explains how it is used in devices such as smartphones, autos, and drones. “I do think that machine learning altogether is probably going to be one of the biggest shifts in computing that we’ll see in quite a few years. I’m reluctant to put a number on it like — the biggest thing in 25 years or whatever,” said Jem Davies in a recent investor call. “But this is going to be big. It is going to affect all of us. It affects quite a lot of ARM, in fact.”
Intel-owned Movidius has introduced a fascinating new device called the Fathom Neural Compute Stick, a modular deep learning accelerator in the form of a standard USB stick. “The Fathom Neural Compute Stick is the first of its kind: A powerful, yet surprisingly efficient Deep Learning processor embedded into a standard USB stick. The Fathom Neural Compute Stick acts as a discrete neural compute accelerator, allowing devices with a USB port to run neural networks at high speed, while sipping under a single watt of power.”
“Computational science has come a long way with machine learning (ML) and deep learning (DL) in just the last year. Leading centers of high-performance computing are making great strides in developing and running ML/DL workloads on their systems. Users and algorithm scientists continue to optimize their codes and techniques, while system architects work out the challenges they still face on various system architectures. At SC16, I had the honor of hosting three of HPC’s thought leaders in a panel to get their ideas about the state of Artificial Intelligence (AI), today’s challenges with the technology, and where it’s going.”