NASA Perspectives on Deep Learning

Nikunj Oza from NASA Ames gave this talk at the HPC User Forum. “This talk will give a broad overview of work at NASA in the space of data sciences, data mining, machine learning, and related areas. This will include work within the Data Sciences Group at NASA Ames, together with other groups at NASA and university and industry partners. We will delineate our thoughts on the roles of NASA, academia, and industry in advancing machine learning to help with NASA problems.”

GPUs Accelerate Population Distribution Mapping Around the Globe

With the Earth’s population at 7 billion and growing, understanding population distribution is essential to meeting societal needs for infrastructure, resources and vital services. This article highlights how NVIDIA GPU-powered AI is accelerating mapping and analysis of population distribution around the globe. “If there is a disaster anywhere in the world,” said Bhaduri, “as soon as we have imaging we can create very useful information for responders, empowering recovery in a matter of hours rather than days.”

Machine & Deep Learning: Practical Deployments and Best Practices for the Next Two Years

Arno Kolster from Providentia Worldwide gave this talk at the HPC User Forum in Milwaukee. “Providentia Worldwide is a new venture in technology and solutions consulting which bridges the gap between High Performance Computing and Enterprise Hyperscale computing. We take the best practices from the most demanding compute environments in the world and apply those techniques and design patterns to your business.”

AI Breakthroughs and Initiatives at the Pittsburgh Supercomputing Center

Nick Nystrom and Paola Buitrago from PSC gave this talk at the HPC User Forum in Milwaukee. “The Bridges supercomputer at PSC offers the possibility for experts in fields that never before used supercomputers to tackle problems in Big Data and answer questions based on information that no human would live long enough to study by reading it directly.”

Video: Characterization and Benchmarking of Deep Learning

Natalia Vassilieva from HP Labs gave this talk at the HPC User Forum in Milwaukee. “Our Deep Learning Cookbook is based on a massive collection of performance results for various deep learning workloads on different hardware/software stacks, and analytical performance models. This combination enables us to estimate the performance of a given workload and to recommend an optimal hardware/software stack for that workload. Additionally, we use the Cookbook to detect bottlenecks in existing hardware and to guide the design of future systems for artificial intelligence and deep learning.”
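To give a sense of the idea behind such a recommendation step, here is a minimal sketch, not the actual Cookbook tooling, that picks the best-measured hardware/software stack for a given workload from a table of benchmark results. The workload names, stack names, and throughput numbers are hypothetical placeholders.

```python
# Toy illustration of the Cookbook idea: choose the hardware/software stack
# with the best measured throughput for a workload. All entries below are
# hypothetical examples, not HPE measurements.

benchmarks = {
    # (workload, stack): measured throughput in samples/sec
    ("resnet50_training", "8x_gpu_tensorflow"): 2200.0,
    ("resnet50_training", "4x_gpu_tensorflow"): 1150.0,
    ("resnet50_training", "2x_cpu_mkl"):          95.0,
    ("lstm_inference",    "8x_gpu_tensorflow"):  5400.0,
    ("lstm_inference",    "2x_cpu_mkl"):         3100.0,
}

def recommend_stack(workload):
    """Return the stack with the highest measured throughput for a workload."""
    candidates = {stack: perf for (w, stack), perf in benchmarks.items()
                  if w == workload}
    if not candidates:
        raise KeyError(f"no measurements for workload {workload!r}")
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    print(recommend_stack("resnet50_training"))  # -> 8x_gpu_tensorflow
```

In the real Cookbook, analytical performance models fill in the gaps between measured configurations; the sketch above only covers the lookup side of that combination.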

New OrionX Survey: Insights in Artificial Intelligence

In this Radio Free HPC podcast, Dan Olds and Shahin Khan from OrionX describe their new AI Survey. “OrionX Research has completed one of the most comprehensive surveys to date of Artificial Intelligence, Machine Learning, and Deep Learning. With over 300 respondents in North America, representing 13 industries, our model indicates a confidence level of 95% and a margin of error of 6%. Covering 144 questions/data points, it provides a comprehensive view of what customers are doing and planning to do with AI/ML/DL.”
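As a rough sanity check on those figures (back-of-the-envelope arithmetic, not part of the OrionX methodology), the standard margin-of-error formula for a survey proportion at 95% confidence comes out to roughly 6% for about 300 respondents:

```python
import math

# Margin of error for a survey proportion at 95% confidence.
# Assumes simple random sampling and worst-case p = 0.5 with n = 300 respondents.
z = 1.96          # z-score for 95% confidence
p = 0.5           # worst-case proportion
n = 300           # approximate sample size

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"{margin_of_error:.1%}")  # ~5.7%, consistent with the stated ~6%
```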

Call for Participation: GTC 2018 in San Jose

The GPU Technology Conference (GTC 2018) has issued its Call for Participation. The event takes place March 26-29 in San Jose, California. “Don’t miss this unique opportunity to participate in the world’s most important GPU event, NVIDIA’s GPU Technology Conference (GTC 2018). Sign up to present a talk, poster, or lab on how GPUs power the most dynamic areas in computing today—including AI and deep learning, big data analytics, healthcare, smart cities, IoT, HPC, VR, and more.”

Trends in the Worldwide HPC Market

In this video from the HPC User Forum in Milwaukee, Earl Joseph and Steve Conway from Hyperion Research present an update on the HPC, AI, and storage markets. “Hyperion Research forecasts that the worldwide HPC server-based AI market will expand at a 29.5% CAGR to reach more than $1.26 billion in 2021, up more than three-fold from $346 million in 2016.”
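As a quick check on the arithmetic behind that forecast, compounding $346 million at 29.5% per year for the five years from 2016 to 2021 does land at roughly $1.26 billion:

```python
# Verify the compound annual growth rate (CAGR) figures quoted above:
# $346M in 2016 growing at 29.5% per year through 2021.
start, rate, years = 346e6, 0.295, 2021 - 2016

projected = start * (1 + rate) ** years
print(f"${projected/1e9:.2f}B")          # ~$1.26B, matching the forecast

# Equivalently, the CAGR implied by the two endpoints:
cagr = (1.26e9 / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                     # ~29.5%
```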

Oak Ridge Turns to Deep Learning for Big Data Problems

The Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond (ASCEND) project aims to use deep learning to assist researchers in making sense of massive datasets produced at the world’s most sophisticated scientific facilities. Deep learning is an area of machine learning that uses artificial neural networks to enable self-learning devices and platforms. The team, led by ORNL’s Thomas Potok, includes Robert Patton, Chris Symons, Steven Young and Catherine Schuman.

Heroes of Deep Learning: Andrew Ng interviews Pieter Abbeel

In this video from the Heroes of Deep Learning series, Andrew Ng interviews Pieter Abbeel from UC Berkeley. “Work in Artificial Intelligence in the EECS department at Berkeley involves foundational research in core areas of knowledge representation, reasoning, learning, planning, decision-making, vision, robotics, speech and language processing. There are also significant efforts aimed at applying algorithmic advances to applied problems in a range of areas, including bioinformatics, networking and systems, search and information retrieval.”