Quantum Storage Solutions Power Self-driving Cars for AutonomouStuff

Today Quantum Corporation named AutonomouStuff LLC as its primary partner for storage distribution in the automotive market, enabling AutonomouStuff to deliver Quantum’s comprehensive end-to-end storage solutions for both in-vehicle and data center environments. “Autonomous research generates an enormous volume of data which is vital to achieving the goal of a safe autonomous vehicle,” said Bobby Hambrick, founder and CEO of AutonomouStuff. “Quantum multitier data storage kits powered by StorNext offer a highly scalable and economical solution to the data dilemma researchers face.”

Red Hat’s AI Strategy

“The impact of AI will be visible in the software industry much sooner than the analog world, deeply affecting open source in general, as well as Red Hat, its ecosystem, and its userbase. This shift provides a huge opportunity for Red Hat to offer unique value to our customers. In this session, we’ll provide Red Hat’s general perspective on AI and how we are helping our customers benefit from AI.”

AI Podcast Looks at Recent Developments at NVIDIA Research

In this episode of the AI Podcast, Bryan Catanzaro from NVIDIA discusses some of the latest developments at NVIDIA research. “The goal of NVIDIA research is to figure out what things are going to change the future of the company, and then build prototypes that show the company how to do that,” says Catanzaro. “And AI is a good example of that.”

Deep Learning at Scale for Cosmology Research

In this video from Google I/O 2018, Debbie Bard from NERSC describes Deep Learning at scale for cosmology research. “Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic.”

Neurala Reduces Training Time for Deep Neural Network Technology

Today Neurala announced a breakthrough update to its award-winning Lifelong Deep Neural Network (Lifelong-DNN) technology. The update allows for a significant reduction in training time compared to a traditional DNN—20 seconds versus 15 hours—a reduction in overall data needs, and the ability for deep learning neural networks to learn without the risk of forgetting previous knowledge—with or without the cloud. “It takes a very long time to train a traditional DNN on a dataset, and, once that happens, it must be completely re-trained if even a single piece of new information is added. Our technology allows for a massive reduction in the time it takes to train a neural network and all but eliminates the time it takes to add new information,” said Anatoli Gorshechnikov, CTO and co-founder of Neurala. “Our Lifelong-DNN is the only AI solution that allows for incremental learning and is the breakthrough that companies across many industries have needed to make deep learning useful for their customers.”
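
Neurala’s announcement does not describe how Lifelong-DNN works internally, but the general idea of incremental learning without full retraining can be illustrated with a toy example. The sketch below (plain Python with NumPy, all names hypothetical) uses a nearest-class-mean classifier whose per-class running statistics are updated one example at a time, so new data or even new classes can be added without revisiting old data. It is a conceptual analogy only, not Neurala’s technology.

# Toy illustration of incremental learning: a nearest-class-mean
# classifier whose per-class statistics are updated online, so new
# examples or new classes can be added without retraining from scratch.
# This is NOT Neurala's Lifelong-DNN, just a sketch of the concept.
import numpy as np

class IncrementalNearestMean:
    def __init__(self):
        self.sums = {}    # class label -> running sum of feature vectors
        self.counts = {}  # class label -> number of examples seen

    def partial_fit(self, x, label):
        """Incorporate one example without touching previously seen data."""
        x = np.asarray(x, dtype=float)
        if label not in self.sums:
            self.sums[label] = np.zeros_like(x)
            self.counts[label] = 0
        self.sums[label] += x
        self.counts[label] += 1

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        means = {c: self.sums[c] / self.counts[c] for c in self.sums}
        return min(means, key=lambda c: np.linalg.norm(x - means[c]))

clf = IncrementalNearestMean()
clf.partial_fit([0.0, 0.1], "cat")
clf.partial_fit([1.0, 0.9], "dog")
print(clf.predict([0.9, 1.0]))       # -> "dog"
clf.partial_fit([5.0, 5.0], "bird")  # new class added without retraining
print(clf.predict([4.8, 5.2]))       # -> "bird"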

The Need for Deep Learning Transparency

Steve Conway from Hyperion Research gave this talk at the HPC User Forum. “We humans don’t fully understand how humans think. When it comes to deep learning, humans also don’t understand yet how computers think. That’s a big problem when we’re entrusting our lives to self-driving vehicles or to computers that diagnose serious diseases, or to computers installed to protect national security. We need to find a way to make these ‘black box’ computers transparent.”

Intel FPGAs Power Real-Time AI in the Azure Cloud

At the Microsoft Build conference held this week, Microsoft announced Azure Machine Learning Hardware Accelerated Models powered by Project Brainwave integrated with the Microsoft Azure Machine Learning SDK. In this configuration, customers gain access to industry-leading artificial intelligence inferencing performance for their models using Azure’s large-scale deployments of Intel FPGA (field programmable gate array) technology. “With today’s announcement, customers can now utilize Intel’s FPGA and Intel Xeon technologies to use Microsoft’s stream of AI breakthroughs on both the cloud and the edge.”

Call for Papers: High Performance Machine Learning Workshop – HPML 2018

The HPML 2018 High Performance Machine Learning Workshop has issued its Call for Papers. The event takes place September 24 in Lyon, France. “This workshop is intended to bring together the Machine Learning (ML), Artificial Intelligence (AI) and High Performance Computing (HPC) communities. In recent years, much progress has been made in Machine Learning and Artificial Intelligence in general.”

POWER9 for AI & HPC

Jeff Stuecheli from IBM gave this talk at the HPC User Forum in Tucson. “Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI.”

D-Wave Launches Quadrant Business Unit for Machine Learning

Today D-Wave Systems launched its new Quadrant business unit, formed to provide machine learning services that make state-of-the-art deep learning accessible to companies across a wide range of industries and application areas. Quadrant’s algorithms enable accurate discriminative learning (predicting outputs from inputs) using less data by constructing generative models which jointly model both inputs and outputs. “Quadrant is a natural extension of the scientific and technological advances from D-Wave as we continue to explore new applications for our quantum systems.”
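
The announcement does not detail Quadrant’s algorithms, but the underlying idea of getting discriminative predictions out of a generative model that jointly models inputs and outputs can be sketched with a minimal example. The code below (plain Python with NumPy, all names and data synthetic and illustrative) fits a single joint Gaussian over (x, y) pairs and then predicts y from x via the conditional mean E[y | x]. It is a conceptual sketch only, not Quadrant’s method.

# Sketch of discriminative prediction through a generative joint model:
# fit a Gaussian over (x, y) jointly, then predict y from x via the
# conditional mean E[y | x]. Illustrative only, not Quadrant's algorithms.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)   # synthetic data

# Fit the joint Gaussian: mean vector and 2x2 covariance of (x, y).
data = np.stack([x, y], axis=1)
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

def predict(x_new):
    """Conditional mean of y given x under the fitted joint Gaussian."""
    return mu[1] + cov[1, 0] / cov[0, 0] * (x_new - mu[0])

print(predict(1.0))   # close to 2.0 for this synthetic dataset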