vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler Cloud management portal, the RAPIDS suite of software libraries gives users the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. “The new RAPIDS library offers Python interfaces which will leverage the NVIDIA CUDA platform for acceleration across one or multiple GPUs. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes.”
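As a rough illustration of the kind of GPU-resident pipeline described above, the sketch below uses the RAPIDS cuDF and cuML libraries; the CSV path and column names are hypothetical placeholders, not part of the vScaler or NVIDIA announcement.

```python
# Minimal sketch of a GPU-resident RAPIDS pipeline (cuDF + cuML).
# The file "sales.csv" and the column names are hypothetical; swap in
# your own dataset.
import cudf
from cuml.linear_model import LinearRegression

# Load data straight into GPU memory as a cuDF DataFrame.
df = cudf.read_csv("sales.csv")

# Familiar pandas-like data preparation, executed on the GPU.
df = df.dropna()
X = df[["feature_a", "feature_b", "feature_c"]]
y = df["price"]

# Train and predict with a cuML model without copying data back to the
# host, avoiding the serialization cost of moving between CPU and GPU.
model = LinearRegression()
model.fit(X, y)
predictions = model.predict(X)
print(predictions.head())
```

For the multi-node, multi-GPU deployments mentioned above, RAPIDS pairs the same DataFrame API with Dask (via dask_cudf), so the single-GPU code can be scaled out with relatively few changes.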
Open Source RAPIDS GPU Platform to Accelerate Predictive Data Analytics
Today NVIDIA announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed. “It integrates seamlessly into the world’s most popular data science libraries and workflows to speed up machine learning. We are turbocharging machine learning like we have done with deep learning,” said NVIDIA founder and CEO Jensen Huang.
ISC 2018: NVIDIA DGX-2 — The World’s Most Powerful AI System on Display
In this video, Satinder Nijjar from NVIDIA describes the new DGX-2 GPU supercomputer. “Experience new levels of AI speed and scale with NVIDIA DGX-2, the first 2 petaFLOPS system that combines 16 fully interconnected GPUs for 10X the deep learning performance. It’s powered by NVIDIA DGX software and a scalable architecture built on NVIDIA NVSwitch, so you can take on the world’s most complex AI challenges.”
Mellanox powers NVIDIA DGX-2 AI Supercomputer
Today Mellanox announced that the company’s InfiniBand and Ethernet solutions have been chosen to accelerate the new NVIDIA DGX-2 artificial intelligence system. DGX-2 is the first 2 Petaflop system that combines sixteen GPUs and eight Mellanox ConnectX adapters, supporting both EDR InfiniBand and 100 gigabit Ethernet connectivity. The technological advantages of the Mellanox adapters with smart acceleration engines enable the highest performance for AI and Deep Learning applications.
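As a back-of-the-envelope check (not a figure from the announcement itself), eight adapters at the 100 Gb/s line rate of EDR InfiniBand or 100 Gigabit Ethernet work out to roughly the following aggregate fabric bandwidth:

```python
# Rough aggregate-bandwidth arithmetic for the DGX-2 network fabric,
# assuming all eight ConnectX adapters run at a 100 Gb/s line rate
# (EDR InfiniBand or 100 GbE). Figures are illustrative only.
adapters = 8
line_rate_gbps = 100                      # Gb/s per adapter
aggregate_gbps = adapters * line_rate_gbps
aggregate_gBps = aggregate_gbps / 8       # convert gigabits to gigabytes
print(f"{aggregate_gbps} Gb/s total, ~{aggregate_gBps:.0f} GB/s aggregate")
# -> 800 Gb/s total, ~100 GB/s aggregate
```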
Inside the new NVIDIA DGX-2 Supercomputer with NVSwitch
In this video from the GPU Technology Conference, Marc Hamilton from NVIDIA describes the new DGX-2 supercomputer with the NVSwitch interconnect. “NVIDIA NVSwitch is the first on-node switch architecture to support 16 fully-connected GPUs in a single server node and drive simultaneous communication between all eight GPU pairs at an incredible 300 GB/s each. These 16 GPUs can be used as a single large-scale accelerator with 0.5 Terabytes of unified memory space and 2 petaFLOPS of deep learning compute power. With NVSwitch, we have 2.4 terabytes a second bisection bandwidth, 24 times what you would have with two DGX-1s.”
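One way to see the “fully-connected” topology from software is to query the GPU peer-to-peer access matrix. The sketch below uses PyTorch purely as an illustration (it assumes a CUDA-enabled PyTorch install); on an NVSwitch system such as DGX-2, every GPU pair should report peer access.

```python
# Minimal sketch: query the GPU peer-to-peer access matrix with PyTorch.
# On an NVSwitch-based system like DGX-2, every pair of the 16 GPUs
# should report peer access, reflecting the fully connected topology.
import torch

n = torch.cuda.device_count()
print(f"Visible GPUs: {n}")

for i in range(n):
    peers = [
        j for j in range(n)
        if j != i and torch.cuda.can_device_access_peer(i, j)
    ]
    print(f"GPU {i} can access peers: {peers}")
```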
Video: NVIDIA Unveils DGX-2 Supercomputer
In this video, NVIDIA CEO Jensen Huang unveils the DGX-2 supercomputer. Combined with a fully optimized, updated suite of NVIDIA deep learning software, DGX-2 is purpose-built for data scientists pushing the outer limits of deep learning research and computing. “Watch to learn how we’ve created the first 2 petaFLOPS deep learning system, using NVIDIA NVSwitch to combine the power of 16 V100 GPUs for 10X the deep learning performance.”
NVIDIA Announces DGX-2 as the “First 2 Petaflop Deep Learning System”
Today NVIDIA unveiled the NVIDIA DGX-2: the “world’s largest GPU.” Ten times faster than its predecessor, the DGX-2 is the first single server capable of delivering two petaflops of computational power. DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient.
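For context on where the recurring 2 petaFLOPS figure comes from (a straightforward multiplication, not a number stated in the article): each Tesla V100 is commonly rated at roughly 125 TFLOPS of mixed-precision Tensor Core throughput, so sixteen of them total about 2 PFLOPS.

```python
# Back-of-the-envelope check of the headline 2 petaFLOPS figure,
# assuming the commonly cited ~125 TFLOPS Tensor Core peak per V100.
gpus = 16
tflops_per_v100 = 125            # mixed-precision Tensor Core peak (approx.)
total_tflops = gpus * tflops_per_v100
print(f"{total_tflops} TFLOPS = {total_tflops / 1000} PFLOPS")
# -> 2000 TFLOPS = 2.0 PFLOPS
```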