An Approach to Democratizing HPC-Style Computing

In this sponsored post, Ehsan Totoni, CTO of Bodo.ai, discusses opportunities to integrate a parallelization approach capable of scaling to 10,000 cores or more into popular cloud-based data warehouses, helping to speed large-scale analytics and ELT computing. Because the model can be engineered for various special-purpose hardware and accelerators, it could also be applied to GPUs and FPGAs for media processing and encoding.
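Bodo's public interface centers on a JIT decorator that compiles and automatically parallelizes pandas-style Python over MPI. As a rough sketch of the pattern (the Parquet file and column names here are hypothetical, not from the post):

```python
import bodo
import pandas as pd

# Bodo JIT-compiles this function and partitions the work across MPI ranks.
@bodo.jit
def daily_totals(path):
    df = pd.read_parquet(path)                 # hypothetical input; the read is parallelized
    return df.groupby("date")["sales"].sum()   # distributed groupby-aggregate

print(daily_totals("sales.parquet"))
```

Launched under MPI (for example, mpiexec -n 8 python app.py), the same script runs unchanged as more cores are added.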

NERSC Finalizes Contract for Perlmutter Supercomputer

NERSC has moved another step closer to making Perlmutter — its next-generation GPU-accelerated supercomputer — available to the science community in 2020. In mid-April, NERSC finalized its contract with Cray — which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 — for the new system, a Cray Shasta supercomputer that will feature 24 […]

CUDA-Python and RAPIDS for blazing fast scientific computing

Abe Stern from NVIDIA gave this talk at the ECSS Symposium. “We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started.”
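To give a concrete feel for the Numba half of the talk, here is a minimal sketch (our own, not taken from the presentation): a CUDA kernel written as ordinary Python, JIT-compiled and launched with Numba's bracketed launch syntax.

```python
import numpy as np
from numba import cuda

# A CUDA kernel written in Python: each GPU thread adds one pair of elements.
@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)          # this thread's absolute index in the 1-D grid
    if i < x.size:            # guard against threads past the end of the array
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.empty_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)  # Numba copies the arrays to and from the GPU

assert np.allclose(out, x + y)
```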

NVIDIA DGX-2 Delivers Record Performance on STAC-A3 Benchmark

Today NVIDIA announced record performance on STAC-A3, the financial services industry benchmark suite for backtesting trading algorithms to determine how strategies would have performed on historical data. “Using an NVIDIA DGX-2 system running accelerated Python libraries, NVIDIA shattered several previous STAC-A3 benchmark results, in one case running 20 million simulations on a basket of 50 instruments in the prescribed 60-minute test period versus the previous record of 3,200 simulations.”
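The benchmark code itself is not public, but the underlying pattern (many independent simulations evaluated as one batched array operation on the GPU) can be sketched with an accelerated Python library such as CuPy. The synthetic returns and single leverage parameter below are illustrative stand-ins, not the STAC-A3 workload:

```python
import cupy as cp  # drop-in GPU replacement for NumPy; assumes a CUDA-capable GPU

# Toy batched backtest: score many parameter settings of a simple strategy
# against the same (synthetic) price history in a single GPU operation.
n_days, n_sims = 2_520, 20_000  # ~10 years of daily returns, 20k parameter sets
returns = 0.01 * cp.random.standard_normal((n_days, 1)).astype(cp.float32)

# Each simulation gets a different leverage factor, standing in for real parameters.
leverage = cp.linspace(0.1, 3.0, n_sims, dtype=cp.float32)   # shape (n_sims,)

# Broadcasting yields an (n_days, n_sims) matrix of strategy returns; a cumulative
# product along the time axis produces every equity curve at once on the GPU.
equity = cp.cumprod(1.0 + returns * leverage, axis=0)
best = int(cp.argmax(equity[-1]))
print(f"best final equity {float(equity[-1, best]):.3f} at leverage {float(leverage[best]):.2f}")
```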

RAPIDS: Data Science on GPUs

Christoph Angerer from NVIDIA gave this talk at FOSDEM’19. “The next big step in data science will combine the ease of use of common Python APIs with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.”
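As a small illustration of the familiar-API point (invented data, not from the talk), cuDF mirrors pandas for dataframe work, and cuGraph covers the graph-analytics use cases Angerer mentions:

```python
import cudf
import cugraph

# pandas-style dataframe operations, executed on the GPU with cuDF.
df = cudf.DataFrame({
    "user": ["a", "b", "a", "c", "b", "a"],
    "amount": [10.0, 3.5, 7.25, 1.0, 4.0, 2.5],
})
print(df.groupby("user")["amount"].sum().sort_values(ascending=False))

# Graph analytics with cuGraph: PageRank over a small edge list.
edges = cudf.DataFrame({"src": [0, 1, 2, 2], "dst": [1, 2, 0, 3]})
G = cugraph.Graph()
G.from_cudf_edgelist(edges, source="src", destination="dst")
print(cugraph.pagerank(G).sort_values("pagerank", ascending=False))
```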

IBM’s Plan to Bring Machine Learning Capabilities to Data Scientists Everywhere

Over at the IBM Blog, IBM Fellow Hillary Hunter writes that the company anticipates that the world’s volume of digital data will exceed 44 zettabytes, an astounding number. “IBM has worked to build the industry’s most complete data science platform. Integrated with NVIDIA GPUs and software designed specifically for AI and the most data-intensive workloads, IBM has infused AI into offerings that clients can access regardless of their deployment model. Today, we take the next step in that journey in announcing the next evolution of our collaboration with NVIDIA. We plan to leverage their new data science toolkit, RAPIDS, across our portfolio so that our clients can enhance the performance of machine learning and data analytics.”