“Professional workflows are now infused with artificial intelligence, virtual reality and photorealism, creating new challenges for our most demanding users,” said Bob Pette, vice president of Professional Visualization at NVIDIA. “Our new Quadro lineup provides the graphics and compute performance required to address these challenges. And, by unifying compute and design, the Quadro GP100 transforms the average desktop workstation with the power of a supercomputer.”
Humans are very good at visual pattern recognition, especially when it comes to facial features and graphic symbols: identifying a specific person, or associating a specific symbol with its meaning. It is in exactly these kinds of scenarios that deep learning systems excel. Recognizing each new person or symbol is more efficiently achieved through a training methodology than by reprogramming a conventional computer or explicitly updating database entries.
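To illustrate the contrast between training and reprogramming, here is a deliberately minimal sketch (not from the article) using a nearest-centroid classifier as a stand-in for a learned recognizer: the system picks up a new symbol class from a handful of labeled examples, with no change to its code. The class name, labels, and feature vectors are all hypothetical.

```python
import numpy as np

class CentroidClassifier:
    """Toy recognizer: each class is represented by the mean of its examples."""

    def __init__(self):
        self.centroids = {}

    def train(self, label, examples):
        # "Teaching" a new symbol is just one more call to train() --
        # no reprogramming, no explicit database schema update.
        self.centroids[label] = np.mean(examples, axis=0)

    def predict(self, x):
        # Assign the input to the nearest class centroid.
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(x - self.centroids[label]))

clf = CentroidClassifier()
clf.train("circle", np.array([[1.0, 1.0], [1.2, 0.9]]))
clf.train("cross",  np.array([[-1.0, -1.0], [-0.8, -1.1]]))
print(clf.predict(np.array([0.9, 1.1])))  # → circle
```

A real deep learning system replaces the centroids with millions of learned weights, but the workflow is the same: new capability comes from new training data, not new code.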
Deep learning solutions are typically part of a broader high-performance analytics function in for-profit enterprises, with a requirement to deliver a fusion of business and data requirements. In addition to supporting large-scale deployments, industrial solutions typically require portability, support for a range of development environments, and ease of use.
The recent introduction of new high-end processors from Intel, combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi, provides the raw ‘industry standard’ materials to assemble a test platform suitable for small research projects and development. Combined with open source toolkits, meaningful results can be achieved, but wide-scale enterprise deployment in production environments raises the infrastructure, software, and support requirements to a completely different level.
Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends. From the central computation perspective, today’s multicore processor architectures dominate the TOP500, with 91% of systems based on Intel processors. Looking forward, however, we can expect further developments that may include alternative CPU architectures such as OpenPOWER and ARM.
Deep learning is a method of creating artificial intelligence systems that combines computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints and move into the realm of extensible, trainable systems. Recent advances in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
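The multi-layer-network-plus-training combination described above can be sketched in a few lines. The following toy example (mine, not the article's) trains a tiny two-layer network on the XOR problem with plain gradient descent; layer sizes, learning rate, and epoch count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny data set: XOR, a problem a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(2000):
    # Forward pass through both layers.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass (chain rule), then a gradient-descent update.
    dz2 = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(losses[0], losses[-1])  # training drives the loss down
```

The point of the sketch is the workflow: the network's behavior is not programmed with rules but shaped by repeated exposure to data, which is what lets such systems be extended simply by training on more examples.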
It’s a different kind of computing world out there. The demand for more compute performance for applications in engineering, risk modeling, and the life sciences is relentless. So, how are you keeping up with modern HPC demands? Meet Apollo – creating next-gen HPC and supercomputing.
I’ve been commissioned by insideHPC to get the scoop on who’s jumping ship and moving on up in high performance computing. Familiar names this week include Mary Bass, Wilf Pinfold, and Mike Vildibill.
Today HP and SanDisk announced a long-term partnership to collaborate on a new technology within the Storage Class Memory (SCM) category. The partnership will center on combining HP’s Memristor technology and expertise with SanDisk’s non-volatile ReRAM memory technology and manufacturing and design expertise to create new enterprise-wide solutions for Memory-driven Computing. The two companies will also partner in enhancing data center solutions with SSDs.
“The HP Apollo 8000 supercomputing platform approaches HPC from an entirely new perspective as the system is cooled directly with warm water. This is done through a “dry-disconnect” cooling concept that has been implemented with the simple but efficient use of heat pipes. Unlike cooling fans, which are designed for maximum load, the heat pipes can be optimized by administrators. The approach allows significantly greater performance density, cutting energy consumption in half and creating synergies with other building energy systems, relative to a strictly air-cooled system.”