
At Virtual Nvidia GTC: Wednesday’s Sessions on HPC and AI at Scale

Here’s an overview of HPC-related sessions at Nvidia’s GTC virtual conference today. Note that recordings of these sessions can be accessed by conference attendees (registration is free).

Accelerating Large-Scale AI and HPC in the Cloud – 2-2:40 pm ET

This session looks at Microsoft Azure’s NVIDIA A100-based VM instances for machine learning, deep learning, and HPC. It includes demonstrations of provisioning a single node for testing and of bringing up an entire cluster of nodes for large-scale training or testing, including AI workloads like BERT and HPC workloads like HPL.

Presenters: Eddie Weill, Data Scientist & Solutions Architect, NVIDIA; Jon Shelley, HPC/AI Benchmarking Team, Principal PM Manager, Azure Compute, Microsoft

Accelerating HPC Applications with Arm and NVIDIA GPUs – 5-7 pm ET

This session focuses on how HPC CUDA applications built for x86 can be recompiled to run on Arm. Presenters will use the Oak Ridge National Laboratory “Wombat” cluster, based on the 64-bit Arm architecture, to demonstrate refactoring an x86-based HPC CUDA application to run on Arm. They will also use the high-memory-bandwidth Fujitsu A64FX processor to show how HPC applications that depend on CPU bandwidth can gain additional speedups.

Presenters: Ross Miller, Software Developer, Oak Ridge National Laboratory; Robbie Searles, Solutions Architect, NVIDIA; Max Katz, Senior Solutions Architect, NVIDIA

Advancing Exascale: Faster, Smarter, and Greener – recorded earlier today

In this session, Jean-Pierre Panziera, CTO of HPC at Atos, discusses exascale supercomputers and their reliance on computing accelerators such as GPUs, which offer higher floating-point performance and memory bandwidth. He also looks at how these accelerators enable new AI algorithms used in complex workflows to improve data assimilation, data analysis, and computing itself, and to optimize HPC data center resource utilization.

Accelerating Health Care at Bayer with Science@Scale and Federated Learning – recorded earlier today

In this session, David Ruau, Head of Global Data Assets & Decision Science at Bayer, discusses how a large pharmaceutical company with 150+ years of history transforms from a traditional into a digital player, looking at cloud strategy and federated learning. Ruau examines how Bayer uses GPUs both in the cloud and on premises for scientific discovery.

Realizing the Vision of an AI University – recorded earlier today

This panel discusses how to develop a vision and mission for AI at a university. Topics include:
• Why AI deserves to be a university-wide vision
• How to ensure your university is ready to serve AI as a service
• The benefit and ROI of driving AI in the university
• Emphasis on interdisciplinary research

Panelists include Arnaud Renard, CEO ROMEO Regional Compute Center, University of Reims Champagne-Ardenne; Sean McGuire, Higher Education & Research, EMEA, NVIDIA; Marco Aldinucci, Professor, University of Torino, Italy; Hujun Yin, Professor, The University of Manchester; Wolfgang Nagel, Director Center of Information Services and High Performance Computing, TU Dresden

Benchmarking GPU Clusters with the Jülich Universal Quantum Computer Simulator – recorded earlier today

This session examines simulating quantum computers, a versatile way to benchmark supercomputers with thousands of GPUs. It includes discussion of quantum computer simulators from a linear algebra perspective using the Jülich Universal Quantum Computer Simulator (JUQCS) as an example. It shows how the memory-, network-, and computation-intensive operations of JUQCS can be used to benchmark high-performance computers.

Presenter: Dennis Willsch, Postdoctoral Researcher, Forschungszentrum Jülich GmbH
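
As a rough illustration of the linear-algebra perspective the session describes, here is a minimal NumPy sketch of state-vector quantum simulation. This is an illustrative toy, not JUQCS code; the function name and structure are hypothetical, but the technique (contracting a 2x2 gate matrix against one axis of the reshaped state vector) is the standard approach.

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to the `target` qubit of an n-qubit state vector.

    Illustrative sketch only -- not JUQCS code.
    """
    # Reshape the 2^n state vector into a tensor with one axis per qubit.
    psi = state.reshape([2] * n_qubits)
    # Contract the 2x2 gate matrix with the target qubit's axis.
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; move it back.
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

# Hadamard gate: puts a basis state into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Two-qubit state |00> (amplitude 1 at index 0).
state = np.zeros(4)
state[0] = 1.0

# Apply H to qubit 0: result is (|00> + |10>) / sqrt(2).
state = apply_gate(state, H, target=0, n_qubits=2)
```

The memory cost doubles with each added qubit (2^n amplitudes), which is why large simulations of this kind stress the memory, network, and compute of a supercomputer simultaneously, making them a natural benchmark.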

Grid: A High-Performance and Portable Code for Quantum Chromodynamics – recorded earlier today

This session looks at a portable, data-parallel, high-level interface for structured grid problems on GPU clusters and other architectures. The C++11 library can handle multidimensional arrays distributed across an entire cluster and can target modern CPUs, CUDA, HIP, and SYCL. It includes library support for constructing optimized PDE stencil operators on Cartesian grids and, for convenience, F90-like Cshift constructs that operate on a whole GPU cluster simultaneously.

Presenter: Peter Boyle, Professor, University of Edinburgh and Brookhaven National Laboratory
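
For readers unfamiliar with the F90-style Cshift mentioned above: it is a periodic (circular) shift of a lattice field along one dimension. A single-process NumPy analogue, shown here purely for illustration (Grid performs the equivalent operation on a field distributed across a whole GPU cluster):

```python
import numpy as np

# Toy 4x4 lattice of site values standing in for a distributed field.
field = np.arange(16).reshape(4, 4)

# Circular shift by one site along dimension 0, with periodic
# (wrap-around) boundaries -- the NumPy analogue of Fortran's CSHIFT.
shifted = np.roll(field, shift=-1, axis=0)

# Each row now holds its neighbor's values: row 0 has old row 1,
# and the last row wraps around to hold old row 0.
```

Stencil operators for PDEs are built from exactly such neighbor shifts, which is why having an efficient, cluster-wide Cshift primitive is convenient for grid codes.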

Materials Design Toward the Exascale: Porting Electronic Structure Community Codes to GPUs – recorded earlier today

This session examines the role of materials in science and technology and their connections to major societal challenges, from energy and the environment to information, communication, and manufacturing. Electronic structure methods have become key to materials simulations, allowing scientists to study and design new materials before running actual experiments. The MaX Centre of Excellence (Materials design at the eXascale) is focused on materials modeling at the frontiers of HPC architectures. The presenters discuss the performance and portability of MaX flagship codes, with a special focus on GPU accelerators. Porting to GPUs has been demonstrated (all codes released as GPU-ready) following diverse strategies that address both performance and maintainability while keeping the community engaged.

Presenters: Andrea Ferretti, Senior Researcher and Chair of the MaX Executive Committee, CNR – Nanoscience Institute; Ivan Carnimeo, Post-Doc Researcher, International School for Advanced Studies (SISSA)
