March 7, 2024 — NVIDIA and HP Inc. today announced that NVIDIA CUDA-X data processing libraries will be integrated with HP AI workstation solutions to turbocharge the data preparation and processing work that forms the foundation of generative AI development. Built on the NVIDIA CUDA compute platform, CUDA-X libraries speed data processing for a broad range of data […]
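To make the idea concrete, here is a minimal, hypothetical sketch using RAPIDS cuDF, one of the CUDA-X data processing libraries. It assumes a machine with RAPIDS installed and an NVIDIA GPU; the file and column names are placeholders, not from the announcement.

```python
# Hypothetical sketch: GPU-accelerated data preparation with RAPIDS cuDF,
# one of the CUDA-X data processing libraries. Assumes RAPIDS is installed
# and an NVIDIA GPU is available; file and column names are placeholders.
import cudf

df = cudf.read_csv("training_data.csv")   # load the data set on the GPU
summary = df.groupby("label").mean()      # aggregation runs on the GPU
print(summary)
```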
NVIDIA Rolls Out New Quadro Pascal GPUs
“Professional workflows are now infused with artificial intelligence, virtual reality and photorealism, creating new challenges for our most demanding users,” said Bob Pette, vice president of Professional Visualization at NVIDIA. “Our new Quadro lineup provides the graphics and compute performance required to address these challenges. And, by unifying compute and design, the Quadro GP100 transforms the average desktop workstation with the power of a supercomputer.”
Examples of Deep Learning Industrialization
Humans are very good at visual pattern recognition, especially when it comes to facial features and graphic symbols: identifying a specific person, or associating a specific symbol with its meaning. It is in these kinds of scenarios that deep learning systems excel. Identifying each new person or symbol is achieved more efficiently by a training methodology than by reprogramming a conventional computer or explicitly updating database entries.
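As a rough illustration of that training methodology (our sketch, not from the article), the snippet below extends a classifier to one additional person or symbol by swapping and fine-tuning its final layer instead of reprogramming anything. It assumes PyTorch and torchvision; the backbone, class counts, and dummy batch are placeholders.

```python
# Illustrative sketch: teaching a classifier a new person/symbol by training
# rather than reprogramming. Assumes PyTorch/torchvision; the backbone,
# class counts, and dummy batch are placeholders, not from the article.
import torch
import torch.nn as nn
from torchvision import models

num_known = 10                 # classes the network already recognizes
num_total = num_known + 1      # plus one newly introduced person/symbol

model = models.resnet18(weights=None)                  # stand-in backbone
model.fc = nn.Linear(model.fc.in_features, num_total)  # swap the classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy labeled batch.
images = torch.randn(8, 3, 224, 224)        # stand-in for real images
labels = torch.randint(0, num_total, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```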
Software Framework for Deep Learning
Deep learning solutions are typically part of a broader high-performance analytics function in for-profit enterprises, with a requirement to deliver a fusion of business and data requirements. In addition to supporting large-scale deployments, industrial solutions typically require portability, support for a range of development environments, and ease of use.
Components For Deep Learning
The recent introduction of new high-end processors from Intel, combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi, provides the raw ‘industry standard’ materials to cobble together a test platform suitable for small research projects and development. When combined with open source toolkits, some meaningful results can be achieved, but wide-scale enterprise deployment in production environments raises the infrastructure, software and support requirements to a completely different level.
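A test platform of that kind usually starts with a sanity check like the one below (a hypothetical example, assuming the open source PyTorch toolkit is installed), confirming that an accelerator is actually visible before development begins.

```python
# Hypothetical sanity check for a small test platform: confirm that an
# accelerator (e.g., an NVIDIA Tesla GPU) is visible to an open source
# toolkit before starting development. Assumes PyTorch is installed.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No CUDA device found; falling back to CPU")
```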
The Core Technologies for Deep Learning
Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends. From the central computation perspective, today’s multicore processor architectures dominate the TOP500, with 91% based on Intel processors. Looking forward, however, we can expect further developments that may include core CPU architectures such as OpenPOWER and ARM.
The Industrialization of Deep Learning – Intro
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints, moving into the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
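To ground that definition, the toy example below (our sketch, not part of the original piece) trains a two-layer neural network by gradient descent in plain NumPy; the XOR task and hyperparameters are illustrative stand-ins for a real problem and a large data set.

```python
# Toy sketch of the definition above: a multi-layer neural network plus an
# intensive training loop. Pure NumPy; the XOR task and hyperparameters are
# illustrative stand-ins for a real problem and a large data set.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: hand-derived gradients of the squared error.
    g_out = (out - y) * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ g_out
    b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h
    b1 -= lr * g_h.sum(axis=0)

print(out.round(2))  # predictions should approach [[0], [1], [1], [0]]
```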
Meet Apollo – Revolutionizing HPC and the Supercomputer
It’s a different kind of computing world out there. The demand for more compute performance for applications in engineering, risk modeling, and life sciences is relentless. So, how are you keeping up with modern HPC demands? Meet Apollo – creating next-gen HPC and supercomputing.
HPC People on the Move: October Edition
I’ve been commissioned by insideHPC to get the scoop on who’s jumping ship and moving on up in high performance computing. Familiar names this week include Mary Bass, Wilf Pinfold, and Mike Vildibill.
HP and SanDisk to Team on Memory-Driven Computing
Today HP and SanDisk announced a long-term partnership to collaborate on a new technology within the Storage Class Memory (SCM) category. The partnership will center on HP’s Memristor technology and expertise and SanDisk’s non-volatile ReRAM memory technology, manufacturing, and design expertise to create new enterprise-wide solutions for Memory-Driven Computing. The two companies will also partner to enhance data center solutions with SSDs.