Using AI to Automatically Diagnose Alzheimer’s Disease

Researchers from Stanford University have developed a deep learning-based system that can automatically detect Alzheimer’s disease and its biomarkers from MRIs with 94 percent accuracy. “Our method uses minimal preprocessing of MRIs (imposing minimum preprocessing artifacts) and utilizes a simple data augmentation strategy of downsampled MR images for training purposes,” the researchers stated in their paper.
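
The downsampling augmentation mentioned in the quote can be pictured with a short sketch. This is a minimal illustration, not the authors’ pipeline: the function name, scale factors, and volume shape are all assumptions.

    # Minimal sketch of downsampling-based augmentation for 3D MR volumes.
    # Illustrative only; scale factors and shapes are assumptions.
    import numpy as np
    from scipy.ndimage import zoom

    def downsample_augment(volume, scales=(1.0, 0.75, 0.5)):
        """Return lower-resolution copies of a 3D volume, resized back to the
        original shape so every training example keeps a fixed input size."""
        variants = []
        for s in scales:
            low = zoom(volume, s, order=1)                    # trilinear downsample
            factors = [o / l for o, l in zip(volume.shape, low.shape)]
            variants.append(zoom(low, factors, order=1))      # resize back up
        return variants

    # Example: one synthetic 64x64x64 volume yields three training variants.
    volume = np.random.rand(64, 64, 64).astype(np.float32)
    augmented = downsample_augment(volume)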

Exploiting HPC Technologies for Accelerating Big Data Processing and Associated Deep Learning

DK Panda from Ohio State University gave this talk at the Swiss HPC Conference. “This talk will provide an overview of challenges in accelerating Hadoop, Spark, and Memcached on modern HPC clusters. An overview of RDMA-based designs for Hadoop (HDFS, MapReduce, RPC and HBase), Spark, Memcached, Swift, and Kafka using native RDMA support for InfiniBand and RoCE will be presented. Enhanced designs for these components to exploit NVM-based in-memory technology and parallel file systems (such as Lustre) will also be presented.”

Video: IBM Sets Record TensorFlow Performance with new Snap ML Software

In this video, researchers from IBM Research in Zurich describe how the new IBM Snap Machine Learning (Snap ML) software achieved record training performance, beating a benchmark previously set with TensorFlow. “This training time is 46x faster than the best result that has been previously reported, which used TensorFlow on Google Cloud Platform to train the same model in 70 minutes.”
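
A quick back-of-the-envelope check on the quoted figures (assuming the 70-minute TensorFlow baseline above): a 46x speedup implies a Snap ML training time of roughly a minute and a half.

    # Implied training time from the quoted speedup; purely arithmetic.
    baseline_minutes = 70
    speedup = 46
    print(baseline_minutes / speedup * 60)   # ~91 seconds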

Google Cloud TPU Machine Learning Accelerators now in Beta

John Barrus writes that Cloud TPUs are available in beta on Google Cloud Platform to help machine learning experts train and run their ML models more quickly. “Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale up specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board.”
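
As a rough illustration of how a TensorFlow program attaches to a Cloud TPU, the sketch below uses the newer tf.distribute interface (the API has changed since the 2018 beta) and assumes a provisioned TPU reachable under the placeholder name "my-tpu".

    # Sketch of pointing a Keras model at a Cloud TPU; "my-tpu" is a placeholder.
    import tensorflow as tf

    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    with strategy.scope():                      # model variables live on the TPU cores
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")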

High Performance Inferencing with TensorRT

Chris Gottbrath from NVIDIA gave this talk at SC17 in Denver. “This talk will introduce the TensorRT Programmable Inference Accelerator which enables high throughput and low latency inference on clusters with NVIDIA V100, P100, P4 or P40 GPUs. TensorRT is both an optimizer and runtime – users provide a trained neural network and can easily create highly efficient inference engines that can be incorporated into larger applications and services.”
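
The workflow the quote describes, handing TensorRT a trained network and getting back an optimized engine, looks roughly like the sketch below. It assumes the network has been exported to ONNX as "model.onnx" (a placeholder), and the exact calls vary across TensorRT versions.

    # Rough sketch of building a TensorRT engine from an ONNX export.
    # "model.onnx" is a placeholder; API details differ by TensorRT version.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)            # allow reduced precision
    engine_bytes = builder.build_serialized_network(network, config)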

Scaling Deep Learning Algorithms on Extreme Scale Architectures

Abhinav Vishnu from PNNL gave this talk at the MVAPICH User Group. “Deep Learning (DL) is ubiquitous. Yet leveraging distributed memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. Our results will include validation on several US supercomputer sites such as Berkeley’s NERSC, the Oak Ridge Leadership Computing Facility, and PNNL Institutional Computing.”
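
The core pattern behind scaling deep learning on distributed-memory machines is data parallelism with gradient averaging. The toy sketch below shows that pattern with mpi4py; it is illustrative only and not the speaker’s actual framework.

    # Toy data-parallel step: each MPI rank computes local gradients, then an
    # allreduce averages them before the weight update.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()

    local_grad = np.random.rand(1000)              # stand-in for one layer's gradients
    global_grad = np.empty_like(local_grad)

    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)   # sum across all ranks
    global_grad /= size                                   # average for the update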

Deep Learning Comes to the Student Cluster Competition

“We decided to use a slightly more interesting use case of solving Captchas because it not only highlights the power of deep learning as a tool to create models that recognize and classify unwieldy data, such as distorted characters, grainy images, and overlapping characters, but it also demonstrates that this powerful technology can be used in less positive ways, such as defeating security or privacy protections. Realizing that everyone has access to the tools we use to move society forward, we need to be aware of possible misuse, especially as deep learning becomes more pervasive across industry, healthcare, financial services, and the like.”
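
As a hedged illustration of the kind of model such a Captcha exercise might train, here is a small character-classification CNN in Keras. The input size and the 36-way output (digits plus letters) are assumptions, not details from the competition.

    # Small CNN for classifying single distorted characters cropped from a Captcha.
    # Input shape and class count are illustrative assumptions.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(40, 40, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(36, activation="softmax"),   # one class per character
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])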

IBM Adds TensorFlow Support for PowerAI Deep Learning

Today IBM announced that its PowerAI distribution for popular open source Machine Learning and Deep Learning frameworks on the POWER8 architecture now supports the TensorFlow 0.12 framework that was originally created by Google. TensorFlow support through IBM PowerAI provides enterprises with another option for fast, flexible, and production-ready tools and support for developing advanced machine learning products and systems.

New Bright for Deep Learning Solution Designed for Business

“We have enhanced Bright Cluster Manager 7.3 so our customers can quickly and easily deploy new deep learning techniques to create predictive applications for fraud detection, demand forecasting, click prediction, and other data-intensive analyses,” said Martijn de Vries, Chief Technology Officer of Bright Computing. “Going forward, customers using Bright to deploy and manage clusters for deep learning will not have to worry about finding, configuring, and deploying all of the dependent software components needed to run deep learning libraries and frameworks.”

Google Open Sources TensorFlow for Machine Learning

In a surprise move, Google has open-sourced its TensorFlow artificial intelligence software. The powerful machine learning engine grew out of Google’s internal research and underpins the search giant’s work in areas such as search, speech recognition, and image analysis.