GPU Technology Conference Returns to DC in November

NVIDIA’s GPU Technology Conference returns to Washington, D.C. Nov. 5-6, 2019. “GTC is the premier AI and deep learning conference series, providing you with training, insights, and direct access to experts from NVIDIA and other leading organizations. Join NVIDIA for the latest breakthroughs in self-driving cars, accelerated data science, healthcare, big data, high-performance computing, virtual reality, and more.”

Simplifying AI, Data Science, and HPC Workloads with NVIDIA GPU Cloud

Adel El Hallak and Philip Rogers from NVIDIA gave this talk at the GPU Technology Conference. “Whether it’s for AI, data science and analytics, or HPC, GPU-accelerated software can make possible the previously impossible. But it’s well known that these cutting-edge software tools are often complex to use, hard to manage, and difficult to deploy. We’ll explain how NGC solves these problems and gives users a head start on their projects by simplifying the use of GPU-optimized software.”
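
As a rough illustration of the workflow NGC simplifies, here is a minimal sketch that pulls and runs an NGC container using the Docker SDK for Python. The image tag is illustrative (current tags are listed in the NGC catalog at ngc.nvidia.com), and the NVIDIA Container Toolkit is assumed to be installed on the host.

```python
# Minimal sketch: pull and run an NGC container via docker-py.
# Assumes the NVIDIA Container Toolkit is installed; some NGC images
# also require a prior "docker login nvcr.io" with an NGC API key
# (username "$oauthtoken").
import docker

client = docker.from_env()

# Illustrative repository/tag; browse the NGC catalog for current ones.
client.images.pull("nvcr.io/nvidia/tensorflow", tag="19.10-py3")
image = "nvcr.io/nvidia/tensorflow:19.10-py3"

# Expose all host GPUs to the container and run a quick sanity check.
output = client.containers.run(
    image,
    command="nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())
```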

Video: The Human Side of AI

In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]

Video: Advancing U.S. Weather Prediction Capabilities with Exascale HPC

Mark Govett from NOAA gave this talk at the GPU Technology Conference. “We’ll discuss the revolution in computing, modeling, data handling and software development that’s needed to advance U.S. weather-prediction capabilities in the exascale computing era. Advancing prediction models to cloud-resolving, 1 km-resolution scales will require an estimated 1,000 to 10,000 times more computing power, but existing models can’t exploit exascale systems with millions of processors. We’ll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and use new technologies such as deep learning to speed model execution, data processing, and information processing.”
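
A back-of-envelope check on that 1,000-10,000x figure (our own arithmetic, not NOAA’s): refining the horizontal grid of a 3D model by a factor r multiplies the work by roughly r² for the extra grid columns, and by another factor of r for the shorter timestep the CFL stability condition demands.

```python
# Back-of-envelope cost scaling for grid refinement (illustrative only).
# Refining horizontal resolution by a factor r costs ~ r^2 (grid columns)
# times r (CFL-limited timestep), i.e. ~ r^3 overall.
def cost_multiplier(current_km: float, target_km: float) -> float:
    r = current_km / target_km
    return r ** 3

# Assuming a ~13 km global model (e.g., today's GFS) pushed to 1 km:
print(f"~{cost_multiplier(13, 1):,.0f}x more compute")  # ~2,197x
# Vertical refinement and richer physics push the estimate toward the
# upper end of the quoted 1,000-10,000x range.
```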

Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer

Jack Wells from ORNL gave this talk at the GPU Technology Conference. “HPC centers have traditionally been configured for simulation workloads, but deep learning is increasingly applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We’ll share benchmarks comparing natively compiled code versus containers on POWER systems like Summit, as well as best practices for deploying deep learning frameworks and models on HPC resources for scientific workflows.”
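
As one concrete example of the MPI-backend pattern Wells describes, the sketch below shows minimal data-parallel training with Horovod and PyTorch, a common pairing on Summit-class POWER systems. The model and data are placeholders; the script would be launched with the site’s MPI launcher (e.g., jsrun on Summit, or mpirun elsewhere).

```python
# Minimal sketch: MPI-backed data-parallel training with Horovod + PyTorch.
# Launch with, e.g.:  jsrun -n <ranks> python train.py
import torch
import torch.nn.functional as F
import horovod.torch as hvd

hvd.init()                                 # one rank per GPU, via MPI
torch.cuda.set_device(hvd.local_rank())    # pin each rank to its local GPU

# Placeholder model; scale the learning rate with the number of workers.
model = torch.nn.Linear(1024, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)
hvd.broadcast_parameters(model.state_dict(), root_rank=0)  # sync initial weights

for step in range(100):
    x = torch.randn(32, 1024, device="cuda")            # placeholder batch
    y = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()                                      # gradients allreduced here
    optimizer.step()
    if hvd.rank() == 0 and step % 20 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```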

Video: Prepare for Production AI with the HPE AI Data Node

In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist at HPE, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. “The HPE AI Data Node is an HPE reference configuration offering a storage solution that provides both the capacity for data and a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store.”

NVIDIA GPUs Speed Altair OptiStruct Structural Analysis up to 10x

Last week at GTC, Altair announced that it has achieved up to 10x speedups with the Altair OptiStruct structural analysis solver on NVIDIA GPU-accelerated system architecture — with no compromise in accuracy. This speed boost has the potential to significantly impact industries including automotive, aerospace, industrial equipment, and electronics that frequently need to run large, high-fidelity simulations. “This breakthrough represents a significant opportunity for our customers to increase productivity and improve ROI with a high level of accuracy, much faster than was previously possible,” said Uwe Schramm, Altair’s chief technology officer for solvers and optimization. “By running our solvers on NVIDIA GPUs, we achieved formidable results that will give users a big advantage.”

Video: IBM Powers AI at the GPU Technology Conference

In this video from the GPU Technology Conference, Sumit Gupta from IBM describes how IBM is powering production-level AI and machine learning. “IBM PowerAI provides the easiest on-ramp for enterprise deep learning. PowerAI helped users break deep learning training records on the AlexNet and VGGNet benchmarks thanks to the world’s only CPU-to-GPU NVIDIA NVLink interface. See how new feature development and performance optimizations will advance the future of deep learning in the next twelve months, including NVIDIA NVLink 2.0, leaps in distributed training, and tools that make it easier to create the next deep learning breakthrough.”
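
For readers who want to inspect that NVLink connectivity themselves, here is a small sketch using the NVML Python bindings (pynvml) to query per-link state; the six-link count assumed below matches Volta-generation GPUs.

```python
# Sketch: query NVLink link state per GPU via pynvml (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older bindings return bytes
        name = name.decode()
    for link in range(6):  # Volta-class GPUs expose up to 6 NVLink links
        try:
            state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
        except pynvml.NVMLError:
            break  # link not supported on this device
        print(f"GPU {i} ({name}) NVLink {link}: {'active' if state else 'inactive'}")
pynvml.nvmlShutdown()
```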

Video: NVIDIA Showcases Programmable Acceleration of Multiple Domains with One Architecture

In this video from GTC 2019 in Silicon Valley, Marc Hamilton from NVIDIA describes how accelerated computing is powering AI, computer graphics, data science, robotics, automotive, and more. “Well, we always make so many great announcements at GTC. But one of the traditions Jensen started a few years ago is coming up with a new acronym to make our messaging for the show very simple to remember. So PRADA stands for Programmable Acceleration of Multiple Domains with One Architecture. And that’s really what the GPU has become.”

Oracle Cloud Speeds HPC & AI Workloads at GTC 2019

In this video from the GPU Technology Conference, Karan Batta from Oracle describes how the company provides HPC and Machine Learning in the Cloud with Bare Metal speed. “Oracle Cloud Infrastructure offers wide-ranging support for NVIDIA GPUs, including the high-performance NVIDIA Tesla P100 and V100 GPU instances that provide the highest ratio of CPU cores and RAM per GPU available. With a maximum of 52 physical CPU cores, 8 NVIDIA Volta V100 units per bare metal server, 768 GB of memory, and two 25 Gbps interfaces, these are the most powerful GPU instances on the market.”
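
For readers who want to try those instances, here is a hedged sketch that launches the 8-GPU bare metal shape matching the specs quoted above (BM.GPU3.8) with the OCI Python SDK. Every OCID below is a placeholder to be replaced with values from your own tenancy.

```python
# Sketch: launch an 8x V100 bare metal GPU instance on Oracle Cloud
# Infrastructure via the OCI Python SDK (pip install oci). All OCIDs
# are placeholders; BM.GPU3.8 is the shape matching the quoted specs
# (52 cores, 8x V100, 768 GB RAM).
import oci

config = oci.config.from_file()  # reads ~/.oci/config
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
    availability_domain="Uocm:PHX-AD-1",              # placeholder
    shape="BM.GPU3.8",
    display_name="gpu-training-node",
    image_id="ocid1.image.oc1..example",              # placeholder GPU image
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example",        # placeholder
    ),
)
instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)
```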