
Jack Dongarra Presents: Adaptive Linear Solvers and Eigensolvers

Jack Dongarra presented this talk at the Argonne Training Program on Extreme-Scale Computing. “ATPESC provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

White House Releases Report on the Future of Artificial Intelligence

“Today, to ready the United States for a future in which Artificial Intelligence (AI) plays a growing role, the White House is releasing a report on future directions and considerations for AI called Preparing for the Future of Artificial Intelligence. This report surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy.”

Video: How to Reclaim your Application Performance

In this video from the HPC Advisory Council Spain Conference, Martin Hilgeman from Dell Technologies provides a detailed overview of how to approach code optimization by exposing more parallelism. “Martin Hilgeman brings the perspective of a system builder to the massively parallel performance discussion – examining the continuous advances in multi-core architectures and their impact on users and computational work.”
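Extracting more parallelism from existing code often starts with something as simple as threading a hot loop. The sketch below is not taken from the talk; it is a minimal, illustrative example of exposing loop-level parallelism with an OpenMP directive, assuming a compiler with OpenMP support (e.g., gcc -fopenmp). The file name saxpy_omp.c and the problem size are hypothetical.

/* Minimal sketch (not from the talk): exposing loop-level parallelism
 * with OpenMP. Build with:  gcc -O2 -fopenmp saxpy_omp.c -o saxpy_omp  */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t n = 1 << 24;          /* illustrative problem size */
    const float a = 2.5f;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    if (!x || !y) return 1;

    for (size_t i = 0; i < n; i++) {   /* initialize inputs */
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* The serial loop becomes parallel with a single directive; each
     * iteration is independent, so threads can split the index range. */
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);       /* expect 4.5 */
    free(x);
    free(y);
    return 0;
}

The point of the example is the one Hilgeman makes more generally: when loop iterations are independent, the work can be spread across the cores that modern multi-core architectures already provide, with minimal change to the source.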

NYU Advances Robotics with Nvidia DGX-1 Deep Learning Supercomputer

In this video, NYU researchers describe their plans to advance deep learning with their new Nvidia DGX-1 AI supercomputer. “The DGX-1 is going to be used in just about every research project we have here,” said Yann LeCun, founding director of the NYU Center for Data Science and a pioneer in the field of AI. “The students here can’t wait to get their hands on it.”

Cheyenne – NCAR’s Next-Gen Data-Centric Supercomputer

In this video, Dave Hart, CISL User Services Manager, presents: Cheyenne – NCAR’s Next-Generation Data-Centric Supercomputing Environment. “Cheyenne is a new 5.34-petaflops high-performance computer built for NCAR by SGI. The hardware was delivered on Monday, September 12, at the NCAR-Wyoming Supercomputing Center (NWSC), and the system is on schedule to become operational at the beginning of 2017. All of the compute racks were powered up and nodes booted up within a few days of delivery.”

High Performance Interconnects: Assessment & Rankings

In this video from the HPC Advisory Council Spain Conference, Dan Olds from OrionX discusses the High Performance Interconnect (HPI) market landscape and provides ratings and rankings of today’s HPI choices. “In this talk, we’ll take a look at the technologies and performance of high-end networking technology and the coming battle between onloading vs. offloading interconnect architectures.”

First Look: BeeGFS File System at CSCS

In this video from the HPC Advisory Council Spain Conference, Hussein Harake provides an overview of CSCS and then introduces the audience to the BeeGFS parallel file system. “BeeGFS (formerly FhGFS) is an up-and-coming parallel cluster file system for I/O-intensive workloads. Developed with a strong focus on performance, BeeGFS was designed for very easy installation and management.”

Video: Introduction to Parallel Supercomputing

Pete Beckman presented this talk at the Argonne Training Program on Extreme-Scale Computing. “Here is the Parallel Platform Paradox: The average time required to implement a moderate-sized application on a parallel computer architecture is equivalent to the half-life of the latest parallel supercomputer.”

AI & Robotics Front and Center at GTC Japan

Robotics and Deep Learning applications were front and center at GTC Japan this week, where 2,600 attendees lined up to hear the latest on GPU technologies. “The age of AI is here,” said Jen-Hsun Huang, founder and CEO of NVIDIA. “GPU deep learning ignited this new wave of computing where software learns and machines reason. […]

Video: Sustainable High-Performance Computing through Data Science

Ozalp Babaoglu from the University of Bologna presented this Google Talk. “At exascale, failures and errors will be frequent, with many instances occurring daily. This fact places resilience squarely as another major roadblock to sustainability. In this talk, I will argue that large computer systems, including exascale HPC systems, will ultimately be operated based on predictive computational models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing ‘nuts-and-bolts’ operations.”