Deutsche Bank in AI Partnership with NVIDIA: Risk Models, HPC and a 3D Virtual Avatar
December 7, 2022
The goal is to develop regulatory-compliant, AI-powered services and to support, for example, Deutsche Bank's cloud transformation strategy by using AI and ML to simplify and accelerate cloud migration decisions.
The collaboration follows exploratory work in which the companies said they tested potential use cases with a focus on three in particular: risk model development, high-performance computing, and the creation of a branded virtual avatar.
Accelerated computing enables traders to manage risk and run more scenarios faster and at scale while also improving energy efficiency. Supported by NVIDIA's expertise in AI, ML and accelerated computing, the partnership is intended to enrich Deutsche Bank's work in risk management, improve efficiency and enhance customer service.
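To make the risk-scenario claim concrete, the sketch below shows a minimal GPU Monte Carlo value-at-risk calculation. It is purely illustrative and not Deutsche Bank's or NVIDIA's actual model: it uses the open-source CuPy library, and the portfolio value, drift and volatility figures are invented for the example.

```python
# Minimal sketch of a GPU-accelerated Monte Carlo value-at-risk (VaR) estimate.
# Assumptions: CuPy is installed with a CUDA-capable GPU; all parameters are hypothetical.
import cupy as cp

n_scenarios = 10_000_000          # number of simulated one-day market scenarios
portfolio_value = 1_000_000.0     # hypothetical portfolio value in EUR
mu, sigma = 0.0002, 0.02          # assumed daily drift and volatility of returns

# Simulate one-day log returns directly on the GPU.
returns = cp.random.standard_normal(n_scenarios) * sigma + mu
pnl = portfolio_value * (cp.exp(returns) - 1.0)

# 99% one-day VaR: the loss exceeded in only 1% of simulated scenarios.
var_99 = -cp.percentile(pnl, 1.0)
print(f"99% 1-day VaR: {float(var_99):,.0f} EUR")
```

Running many such simulations in parallel on GPUs, rather than sequentially on CPUs, is what lets traders evaluate more scenarios in the same time window.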
To enable this work, Deutsche Bank plans to leverage NVIDIA AI Enterprise, an end-to-end software suite for streamlining AI development and deployment that can run in the cloud or in the data center. With this flexibility, Deutsche Bank’s AI developers, data scientists and IT professionals will be able to run NVIDIA AI workflows on-premises as well as on Google Cloud, Deutsche Bank’s public cloud provider.
Deutsche Bank is working to develop next-generation user experiences with NVIDIA Omniverse Enterprise, an open computing platform for building and operating metaverse applications, along with AI models and services that make it easier to build and customize lifelike virtual assistants and digital humans.
Extracting key information from unstructured data has long been a challenge for organizations, especially in financial services, and existing large language models do not perform well on financial texts. Deutsche Bank and NVIDIA are therefore testing a collection of large language models called Financial Transformers. These models are intended to deliver outcomes such as early warning signals about the counterparty to a financial transaction, faster data retrieval and the identification of data-quality issues.
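As a rough illustration of the pattern, the sketch below applies a publicly available finance-tuned language model to unstructured financial text. "Financial Transformers" is not a public library, so this uses the open-source Hugging Face transformers package and the ProsusAI/finbert model as stand-ins; the headlines and the early-warning interpretation are invented for the example.

```python
# Illustrative sketch only: classify unstructured financial text with a
# finance-domain language model (not Deutsche Bank's Financial Transformers).
from transformers import pipeline

# Load a finance-tuned sentiment classifier (assumption: any comparable model would do).
classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Counterparty XYZ misses quarterly earnings and faces a ratings downgrade.",
    "Trade confirmation received; settlement expected on the agreed value date.",
]

for text in headlines:
    result = classifier(text)[0]
    # A negative classification on counterparty-related text could feed an early-warning signal.
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

The same pipeline pattern extends to other extraction tasks, such as entity recognition for data retrieval or flagging inconsistent fields as data-quality issues.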
The innovation partnership is an integral part of Deutsche Bank's ongoing technology transformation. Deutsche Bank will expand an internal AI center of excellence, accelerated through the partnership, to support the experimentation with and development of AI and ML services as well as professional skills development. The center will also develop, foster and promote explainable and responsible AI to deepen the understanding of model predictions in financial services applications, and will support further exploration in AI and accelerated computing.
“AI, ML and data will be a game changer in banking, and our partnership with NVIDIA is further evidence that we are committed to redefining what is possible for our clients,” said Christian Sewing, CEO, Deutsche Bank.
“This partnership is a significant step forward in our AI and ML ambitions. It will help us take a leading position in the usage of these technologies in financial services,” added Bernd Leukert, Deutsche Bank’s Management Board Member responsible for Technology, Data and Innovation.
“Accelerated computing and AI are at a tipping point, and we’re bringing them to the world’s enterprises through the cloud,” said Jensen Huang, founder and CEO, NVIDIA. “Every aspect of future business will be supercharged with insight and intelligence running at the speed of light. Together with Deutsche Bank, we are modernizing and reimagining the way financial services are operated and delivered.”