Artificial Intelligence: It’s No Longer Science Fiction

“Computational science has come a long way with machine learning (ML) and deep learning (DL) in just the last year. Leading centers of high-performance computing are making great strides in developing and running ML/DL workloads on their systems. Users and algorithm scientists continue to optimize their codes and techniques, while system architects work through the challenges these workloads still face on various system architectures. At SC16, I had the honor of hosting three of HPC’s thought leaders in a panel to get their views on the state of Artificial Intelligence (AI), today’s challenges with the technology, and where it’s going.”

Adrian Cockcroft Presents: Shrinking Microservices to Functions

In this fascinating talk, Cockcroft describes how advances in hardware and networking have reshaped the way services like Machine Learning are being developed rapidly in the cloud with AWS Lambda. “We’ve seen the same service-oriented architecture principles track advancements in technology, from the coarse-grained services of SOA a decade ago, through microservices that are usually scoped to a more fine-grained single area of responsibility, to today’s functions as a service: serverless architectures where each function is a separately deployed and invoked unit.”
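
To make that last step concrete, here is a minimal sketch of what “a separately deployed and invoked unit” can look like in practice: a single AWS Lambda handler written in Python. The event field and the placeholder scoring logic are illustrative assumptions, not code from the talk.

```python
import json

def handler(event, context):
    # Lambda invokes this one function directly: the trigger payload
    # arrives as `event` and runtime metadata as `context`.
    image_url = event.get("image_url")  # hypothetical input field

    # A real ML-backed function would load a model and run inference
    # here; a placeholder score keeps the sketch self-contained.
    score = 0.5 if image_url else 0.0

    return {
        "statusCode": 200,
        "body": json.dumps({"image_url": image_url, "score": score}),
    }
```

The function is the entire deployment artifact; there is no surrounding service or server to manage, which is exactly the shift Cockcroft describes.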

Deep Learning & HPC: New Challenges for Large Scale Computing

“In recent years, major breakthroughs have been achieved in many different fields using deep learning. From image segmentation to speech recognition to self-driving cars, deep learning is everywhere. The performance of image classification, segmentation, and localization has reached levels not seen before thanks to GPUs and large-scale GPU-based deployments, making deep learning a first-class HPC workload.”

Defining AI, Machine Learning, and Deep Learning

“In this guide, we take a high-level view of AI and deep learning in terms of how they’re being used and what technological advances have made them possible. We also explain the differences between AI, machine learning, and deep learning, and examine the intersection of AI and HPC. We then present the results of a recent insideBIGDATA survey to see how well these new technologies are being received. Finally, we look at a number of high-profile use case examples showing the effective use of AI in a variety of problem domains.”

Agenda Posted for Next Week’s HPC Advisory Council Stanford Conference

“Over two days we’ll delve into a wide range of interests and best practices in applications, tools, and techniques, and share new insights on the trends, technologies, and collaborative partnerships that foster this robust ecosystem. Designed to be highly interactive, the open forum will feature industry notables in keynotes, technical sessions, workshops, and tutorials. These highly regarded subject matter experts (SMEs) will share their work and wisdom covering everything from established HPC disciplines to emerging usage models, and from old-school architectures and breakthrough applications to pioneering research and provocative results. Plus a healthy smattering of conversation and controversy on endeavors in Exascale, Big Data, Artificial Intelligence, Machine Learning, and much, much more!”

Intel FPGAs Break Record for Deep Learning Facial Recognition

Today Intel announced record results on a new benchmark in deep learning and convolutional neural networks (CNNs). ZTE’s engineers used Intel’s midrange Arria 10 FPGA for a cloud inferencing application running a CNN algorithm. “ZTE has achieved a new record – more than a thousand images per second in facial recognition – with what is known as “theoretical high accuracy” for their custom topology. Intel’s Arria 10 FPGA accelerated the raw design performance more than 10 times while maintaining that accuracy.”

IBM Adds TensorFlow Support for PowerAI Deep Learning

Today IBM announced that its PowerAI distribution for popular open source Machine Learning and Deep Learning frameworks on the POWER8 architecture now supports the TensorFlow 0.12 framework that was originally created by Google. TensorFlow support through IBM PowerAI provides enterprises with another option for fast, flexible, and production-ready tools and support for developing advanced machine learning products and systems.
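
For readers who have not used that generation of the framework, the sketch below shows the graph-and-session style TensorFlow exposed around the 0.12 release; the single softmax layer and MNIST-sized shapes are chosen purely for illustration and are not tied to PowerAI itself.

```python
import numpy as np
import tensorflow as tf

# Build a static graph: one softmax layer over 784-pixel inputs.
x = tf.placeholder(tf.float32, shape=[None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Execute the graph in a session; this is the step a distribution
# like PowerAI accelerates on POWER8 (and attached GPUs).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 784).astype(np.float32)  # stand-in data
    probs = sess.run(y, feed_dict={x: batch})
    print(probs.shape)  # (32, 10)
```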

CUDA Made Easy: An Introduction

“CUDA C++ is just one of the ways you can create massively parallel applications with CUDA. It lets you use the powerful C++ programming language to develop high performance algorithms accelerated by thousands of parallel threads running on GPUs. Many developers have accelerated their computation- and bandwidth-hungry applications this way, including the libraries and frameworks that underpin the ongoing revolution in artificial intelligence known as Deep Learning.”
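
The post itself teaches CUDA C++; to keep the examples in this roundup in a single language, here is the same thousands-of-threads idea sketched from Python using Numba’s CUDA support. This is an illustrative substitution under that assumption, not code from the article.

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # Each GPU thread computes exactly one element of the result.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

# Launch enough 256-thread blocks to cover all n elements.
threads = 256
blocks = (n + threads - 1) // threads
saxpy[blocks, threads](2.0, x, y, out)

print(np.allclose(out, 2.0 * x + y))  # True
```

The launch configuration mirrors CUDA C++’s `<<<blocks, threads>>>` execution configuration: thousands of threads run the kernel body in parallel, one array element each.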

D-Wave Rolls Out 2000 Qubit System

“D-Wave’s leap from 1000 qubits to 2000 qubits is a major technical achievement and an important advance for the emerging field of quantum computing,” said Earl Joseph, IDC program vice president for high performance computing. “D-Wave is the only company with a product designed to run quantum computing problems, and the new D-Wave 2000Q system should be even more interesting to researchers and application developers who want to explore this revolutionary new approach to computing.”

Video: A Look at the Lincoln Laboratory Supercomputing Center

“Guided by the principles of interactive supercomputing, Lincoln Laboratory was responsible for a lot of the early work on machine learning and neural networks. We now have a world-class group investigating speech and video processing as well as machine learning topics, including theoretical foundations, algorithms, and applications. In the process, we are changing the way we go about computing. Over the years we have tended to assign a specific system to serve a discrete market, audience, or project. But today those once highly specialized systems are becoming increasingly heterogeneous. Users are interacting with computational resources that exhibit a high degree of autonomy. The system, not the user, decides on the computer hardware and software that will be used for the job.”