Today NVIDIA announced APAC’s first deployment of NVIDIA DGX-1 deep learning supercomputers, at CSIRO in Australia. “There is a growing interest from research groups to adopt machine learning techniques to support their projects,” said Angus Macoustra, executive manager for Scientific Computing at CSIRO. “CSIRO research projects are already using the DGX-1 systems, and in time, it is expected that machine learning will have applicability across all our areas of research and be used by hundreds of researchers.”
This may indeed be the year of artificial intelligence, when the technology came into its own for mainstream businesses. But will other companies understand whether AI has value for them? Perhaps the better question is “Why now?”, a question that centers on both the opportunity and on why many companies are afraid of missing out.
“We are still in the first minutes of the first day of the intelligence revolution.” In this keynote, Dr. Joseph Sirosh presents five cloud AI solutions, and their implementations, that the intelligent cloud delivers. These five patterns, which his team presented at the Summit, are ways to bring data and learning together in cloud services to infuse intelligence.
Today Amazon Web Services announced the availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud designed for compute-intensive applications that require massive parallel floating point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering. With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud.
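Launching a P2 instance programmatically goes through the same EC2 RunInstances API as any other instance type; only the `InstanceType` value changes. Below is a minimal sketch that builds the request parameters for the 16-GPU `p2.16xlarge` size; the AMI ID and key pair name are placeholders, not real resources.

```python
# Hedged sketch: request parameters for launching a 16-GPU P2 instance
# via the standard EC2 RunInstances API. The AMI ID is a placeholder.
def p2_launch_params(ami_id, key_name, instance_type="p2.16xlarge"):
    """Build the keyword arguments for boto3's EC2 run_instances call."""
    return {
        "ImageId": ami_id,          # e.g. a CUDA-enabled deep learning AMI
        "InstanceType": instance_type,
        "KeyName": key_name,
        "MinCount": 1,
        "MaxCount": 1,
    }

params = p2_launch_params("ami-xxxxxxxx", "my-keypair")

# With boto3 installed and AWS credentials configured, the actual
# launch would look like:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.run_instances(**params)
print(params["InstanceType"])
```

The smaller `p2.xlarge` and `p2.8xlarge` sizes are requested the same way, simply by passing a different `instance_type`.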
SC16 will continue its HPC Matters Plenary session series this year with a panel discussion on HPC and Precision Medicine. The event will take place at 5:30 pm on Monday, Nov 14, just prior to the exhibits opening gala. “The success of all of these research programs hinges on harnessing the power of HPC to analyze volumes of complex genomics and other biological datasets that simply can’t be processed by humans alone. The challenge for our community will be to develop the computing tools and services needed to transform how we think about disease and bring us closer to the precision medicine future.”
Today at GTC Europe, NVIDIA unveiled Xavier, an all-new SoC based on the company’s next-gen Volta GPU, which will be the processor in future self-driving cars. According to NVIDIA CEO Jen-Hsun Huang, the ARM-based Xavier will feature unprecedented performance and energy efficiency, while supporting deep-learning features important to the automotive market. A single Xavier-based AI car supercomputer will be able to replace today’s fully configured DRIVE PX 2 with two Parker SoCs and two Pascal GPUs.
“We are at an inflection point in the big data era,” said Bob Picciano, senior vice president, IBM Analytics. “We know that users spend up to 80 percent of their time on data preparation, no matter the task, even when they are applying the most sophisticated AI. Project DataWorks helps transform this challenge by bringing together all data sources on one common platform, enabling users to get the data ready for insight and action, faster than ever before.”
“Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance,” said Dr. Diamos. “The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications.”
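DeepBench times low-level operations such as dense matrix multiply (GEMM), which dominates neural network training. The sketch below illustrates that measure-first approach in pure Python with a naive GEMM and a hypothetical small problem size; real benchmarks like DeepBench use tuned vendor libraries and matrix shapes drawn from actual networks.

```python
import time

def gemm(a, b):
    """Naive dense matrix multiply: C = A x B (lists of lists)."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            aip = a[i][p]
            row_b = b[p]
            row_c = c[i]
            for j in range(m):
                row_c[j] += aip * row_b[j]
    return c

n = 64  # tiny illustrative size; benchmark sizes come from real workloads
a = [[1.0] * n for _ in range(n)]
b = [[1.0] * n for _ in range(n)]

start = time.perf_counter()
c = gemm(a, b)
elapsed = time.perf_counter() - start

flops = 2.0 * n * n * n  # one multiply and one add per inner-loop step
print(f"{flops / elapsed / 1e9:.4f} GFLOP/s")
```

Running the same measurement across different hardware (or different libraries) yields the kind of per-platform comparison DeepBench publishes, which is what lets processor designers see where their hardware falls short.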
The European Fortissimo Project has issued its Second Call for Proposals. Fortissimo is a collaborative project that enables European SMEs to be more competitive globally through the use of simulation services running on High Performance Computing Cloud infrastructure.
Today ArrayFire released the latest version of their ArrayFire open source library of parallel computing functions supporting CUDA, OpenCL, and CPU devices. ArrayFire v3.4 improves features and performance for applications in machine learning, computer vision, signal processing, statistics, finance, and more.