NVIDIA Launches Deep Learning Teaching Kit for University Professors

“With demand for graduates with AI skills booming, we’ve released the NVIDIA Deep Learning Teaching Kit to help educators give their students hands-on experience with GPU-accelerated computing. The kit — co-developed with deep-learning pioneer Yann LeCun, and largely based on his deep learning course at New York University — was announced Monday at the NIPS machine learning conference in Barcelona. Thanks to the rapid development of NVIDIA GPUs, training deep neural networks is more efficient than ever in terms of both time and resource cost. The result is an AI boom that has given machines the ability to perceive — and understand — the world around us in ways that mimic, and even surpass, our own.”

Students Learn HPC at ARL Supercomputing Summer Institute

The DOD High Performance Computing Program and the U.S. Army Research Laboratory (ARL) hosted the Supercomputing Summer Institute July 18-29, 2016, at Aberdeen Proving Ground, Maryland. The program introduced students to high-tech learning opportunities.

Allinea to Offer New Software Performance Briefings at SC16

Today Allinea Software announced that the company will hold a series of New Software Performance Briefings at SC16. The briefings will be held at the Allinea booth #1508 in Salt Lake City. “This year at SC, we’re giving booth visitors the opportunity to find out more about what they don’t know about their software performance,” said […]

Video: HPC Opportunities in Deep Learning

“This talk will provide empirical evidence from our Deep Speech work that application level performance (e.g. recognition accuracy) scales with data and compute, transforming some hard AI problems into problems of computational scale. It will describe the performance characteristics of Baidu’s deep learning workloads in detail, focusing on the recurrent neural networks used in Deep Speech as a case study. It will cover challenges to further improving performance, describe techniques that have allowed us to sustain 250 TFLOP/s when training a single model on a cluster of 128 GPUs, and discuss straightforward improvements that are likely to deliver even better performance.”
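As a back-of-envelope check (not from the talk itself), the sustained per-GPU throughput implied by those figures is straightforward to work out, assuming the 250 TFLOP/s is an aggregate across the whole cluster:

```python
# Rough arithmetic implied by the abstract's figures (assumption: the
# 250 TFLOP/s sustained rate is aggregate across all 128 GPUs).
cluster_tflops = 250.0
num_gpus = 128
per_gpu = cluster_tflops / num_gpus
print(f"~{per_gpu:.2f} TFLOP/s sustained per GPU")  # ~1.95 TFLOP/s
```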

WACCPD Workshop at SC16 to Focus on Using Directives for Accelerators

The third Workshop on Accelerator Programming Using Directives (WACCPD) has posted its meeting agenda. Held in conjunction with SC16, the WACCPD workshop takes place Nov. 14 in Salt Lake City. “To address the rapid pace of hardware evolution, developers continue to explore and add richer features to the various (parallel) programming standards. Domain scientists continue to explore the programming and tools space while preparing themselves for future Exascale systems. This workshop explores innovative language features — their implementations, compilation & runtime scheduling techniques, performance optimization strategies, and autotuning tools exploring the optimization space. WACCPD has been one of the major forums for bringing together the users, developers and tools community to share their knowledge and experiences of using directives and similar approaches to program emerging complex systems.”

Jack Dongarra Presents: Adaptive Linear Solvers and Eigensolvers

Jack Dongarra presented this talk at the Argonne Training Program on Extreme-Scale Computing. “ATPESC provides an intensive two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: Introduction to Parallel Supercomputing

Pete Beckman presented this talk at the Argonne Training Program on Extreme-Scale Computing. “Here is the Parallel Platform Paradox: The average time required to implement a moderate-sized application on a parallel computer architecture is equivalent to the half-life of the latest parallel supercomputer.”

Preview: SC16 Tutorial on How to Buy a Supercomputer

“This tutorial, part of the SC16 State of the Practice, will guide attendees through the process of purchasing and deploying an HPC system. It will cover the whole process: engaging with stakeholders to secure funding, requirements capture, market survey, specification of the tender/request-for-proposal documents, engaging with suppliers, evaluating proposals, and managing the installation. Attendees will learn how to specify what they want, yet enable the suppliers to provide innovative solutions beyond their specification both in technology and in price; how to demonstrate to stakeholders that the solution selected is best value for money; and the common risks, pitfalls, and mitigation strategies essential to achieve an on-time and on-quality installation process.”

Video: Intel Xeon Phi (KNL) Processor Overview

Adrian Jackson from EPCC at the University of Edinburgh presented this tutorial to ARCHER users. “We have been working for a number of years on porting computational simulation applications to the KNC, with varying success. We were keen to test this new processor with its promise of 3x serial performance compared to the KNC and 5x memory bandwidth over normal processors (using the high-bandwidth MCDRAM memory attached to the chip).”

Video: How ORNL is Bridging the Gap between Computing and Facilities

“Starting in 2015, Oak Ridge National Laboratory partnered with the University of Tennessee to offer a minor-degree program in data center technology and management, one of the first offerings of its kind in the country. ORNL staff members developed the senior-level course in collaboration with UT College of Engineering professor Mark Dean after an ORNL strategic partner identified a need for employees who could bridge both the facilities and operational aspects of running a data center. In addition to developing the course curriculum, ORNL staff members are also serving as guest lecturers.”