

Neurala Reduces Training Time for Deep Neural Network Technology

Today Neurala announced a breakthrough update to its award-winning Lifelong Deep Neural Network (Lifelong-DNN) technology. The update allows for a significant reduction in training time compared to a traditional DNN—20 seconds versus 15 hours—a reduction in overall data needs, and the ability for deep learning neural networks to learn without the risk of forgetting previous knowledge—with or without the cloud. “It takes a very long time to train a traditional DNN on a dataset, and, once that happens, it must be completely re-trained if even a single piece of new information is added. Our technology allows for a massive reduction in the time it takes to train a neural network and all but eliminates the time it takes to add new information,” said Anatoli Gorshechnikov, CTO and co-founder of Neurala. “Our Lifelong-DNN is the only AI solution that allows for incremental learning and is the breakthrough that companies across many industries have needed to make deep learning useful for their customers.”
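To see why incremental learning avoids the full-retraining problem, consider a minimal sketch. This is not Neurala's proprietary Lifelong-DNN—it is a textbook nearest-class-mean classifier—but it illustrates the general principle: folding in a new example updates only a running statistic, rather than triggering a retraining pass over the entire dataset.

```python
# Illustrative sketch only: a nearest-class-mean classifier, NOT Neurala's
# Lifelong-DNN. Adding a new example updates one running mean in O(d) time
# instead of retraining on all previously seen data.
import math

class IncrementalNCM:
    def __init__(self):
        self.means = {}   # label -> running mean vector
        self.counts = {}  # label -> number of examples seen

    def learn(self, x, label):
        """Fold one new example into its class mean incrementally."""
        n = self.counts.get(label, 0)
        mean = self.means.get(label, [0.0] * len(x))
        self.means[label] = [m + (xi - m) / (n + 1) for m, xi in zip(mean, x)]
        self.counts[label] = n + 1

    def predict(self, x):
        """Return the label whose class mean is closest to x."""
        return min(self.means,
                   key=lambda lbl: math.dist(x, self.means[lbl]))

clf = IncrementalNCM()
clf.learn([0.0, 0.0], "a")
clf.learn([1.0, 1.0], "b")
clf.learn([0.2, 0.1], "a")      # new data: one cheap update, no retraining
print(clf.predict([0.1, 0.0]))  # -> a
```

Because old classes are represented only by their means, new knowledge never overwrites them—a crude stand-in for the "learning without forgetting" property the announcement describes.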

PASC18 Panel to Focus on Fast and Big Data, Fast and Big Computation

Today the PASC18 conference announced that this year’s panel discussion will focus on the central theme of the conference: “Fast and Big Data, Fast and Big Computation.” Are these two worlds evolving and converging together? Or is HPC facing a game-changing moment, as the appetite of the scientific computing community and industry shifts toward a different type of computation than what we’re used to?

The Need for Deep Learning Transparency

Steve Conway from Hyperion Research gave this talk at the HPC User Forum. “We humans don’t fully understand how humans think. When it comes to deep learning, humans also don’t understand yet how computers think. That’s a big problem when we’re entrusting our lives to self-driving vehicles or to computers that diagnose serious diseases, or to computers installed to protect national security. We need to find a way to make these “black box” computers transparent.”

Intel FPGAs Power Real-Time AI in the Azure Cloud

At the Microsoft Build conference held this week, Microsoft announced Azure Machine Learning Hardware Accelerated Models powered by Project Brainwave integrated with the Microsoft Azure Machine Learning SDK. In this configuration, customers gain access to industry-leading artificial intelligence inferencing performance for their models using Azure’s large-scale deployments of Intel FPGA (field programmable gate array) technology. “With today’s announcement, customers can now utilize Intel’s FPGA and Intel Xeon technologies to use Microsoft’s stream of AI breakthroughs on both the cloud and the edge.”

Call for Papers: High Performance Machine Learning Workshop – HPML 2018

The HPML 2018 High Performance Machine Learning Workshop has issued its Call for Papers. The event takes place September 24 in Lyon, France. “This workshop is intended to bring together the Machine Learning (ML), Artificial Intelligence (AI) and High Performance Computing (HPC) communities. In recent years, much progress has been made in Machine Learning and Artificial Intelligence in general.”

POWER9 for AI & HPC

Jeff Stuecheli from IBM gave this talk at the HPC User Forum in Tucson. “Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI.”

D-Wave Launches Quadrant Business Unit for Machine Learning

Today D-Wave Systems launched its new Quadrant business unit, formed to provide machine learning services that make state-of-the-art deep learning accessible to companies across a wide range of industries and application areas. Quadrant’s algorithms enable accurate discriminative learning (predicting outputs from inputs) using less data by constructing generative models which jointly model both inputs and outputs. “Quadrant is a natural extension of the scientific and technological advances from D-Wave as we continue to explore new applications for our quantum systems.”
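The idea of using a generative joint model for discriminative prediction can be illustrated with a textbook example. The sketch below is a tiny Gaussian naive Bayes classifier—not D-Wave's algorithm—showing the general recipe Quadrant's description alludes to: estimate the joint distribution p(x, y), then predict y from x via Bayes' rule.

```python
# Hedged sketch: a minimal Gaussian naive Bayes classifier. It models the
# joint distribution p(x, y) generatively (per-class Gaussians plus class
# priors) and then predicts discriminatively via argmax_y p(y) * p(x | y).
# This is NOT D-Wave's method, only a simple instance of the same principle.
import math
from collections import defaultdict

def fit(samples):
    """Estimate per-class means, variances, and priors from (x, y) pairs."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    total = len(samples)
    model = {}
    for y, xs in by_class.items():
        d = len(xs[0])
        means = [sum(x[j] for x in xs) / len(xs) for j in range(d)]
        varis = [sum((x[j] - means[j]) ** 2 for x in xs) / len(xs) + 1e-9
                 for j in range(d)]
        model[y] = (means, varis, len(xs) / total)  # p(x|y) params, p(y)
    return model

def predict(model, x):
    """Return argmax_y of log p(y) + sum_j log N(x_j; mean_j, var_j)."""
    def log_joint(y):
        means, varis, prior = model[y]
        ll = math.log(prior)
        for xj, m, v in zip(x, means, varis):
            ll += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        return ll
    return max(model, key=log_joint)

m = fit([([0.0], "low"), ([0.2], "low"), ([1.0], "high"), ([1.2], "high")])
print(predict(m, [0.1]))  # -> low
```

Because the generative model captures the input distribution as well as the labels, it can extract more structure from each example—one intuition for why such models can learn from less data, as the announcement claims.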

New AI Performance Milestones with NVIDIA Volta GPU Tensor Cores

Over at the NVIDIA blog, Loyd Case shares some recent advancements that deliver dramatic performance gains on GPUs to the AI community. “We have achieved record-setting ResNet-50 performance for a single chip and single server with these improvements. Recently, fast.ai also announced their record-setting performance on a single cloud instance. A single V100 Tensor Core GPU achieves 1,075 images/second when training ResNet-50, a 4x performance increase compared to the previous generation Pascal GPU.”

HPC in Ontario, Canada

Dr. Chris Loken gave this talk at the HPC User Forum. “We collaborate with our partners to centralize strategy and planning for Ontario’s advanced computing assets, including hardware, software, data management, storage, security, connectivity and Highly Qualified Personnel. Together, we strive to address concerns about Ontario’s capacity to supply advanced computing at the level required for leading research and enabling industrial competitiveness.”

Quobyte Joins STAC Benchmark Council to Help Financial Services Solve Storage Challenges

Today hyperscale storage provider Quobyte announced that it has accepted an invitation to join the Securities Technology Analysis Center (STAC) Benchmark Council, an influential group of technologists in the finance industry. “Quobyte provides massively scalable software storage to allow the industry to take advantage of the insights they can gain from the extensive amount of historical trading data they have collected,” said Björn Kolbeck, Quobyte’s CEO and Co-Founder. “We’ve found that Quobyte storage software running on a cluster of industry standard whitebox servers can significantly reduce time spent on backtesting — literally converting time saved into money.”