Today Nvidia announced that growing ranks of Python users can now take full advantage of GPU acceleration for HPC and big data analytics applications by using the CUDA parallel programming model. As a popular, easy-to-use language, Python enables users to write high-level software code that captures their algorithmic ideas without delving deep into programming details. Python’s extensive libraries and advanced features make it ideal for a broad range of HPC science, engineering and big data analytics applications.
“Our research group typically prototypes and iterates new ideas and algorithms in Python and then rewrites the algorithm in C or C++ once the algorithm is proven effective,” said Vijay Pande, professor of Chemistry and of Structural Biology and Computer Science at Stanford University. “CUDA support in Python enables us to write high-performance code while maintaining the productivity offered by Python.”
Support for CUDA parallel programming comes from NumbaPro, a Python compiler in the new Anaconda Accelerate product from Continuum Analytics. This support was made possible by Nvidia’s contribution of CUDA compiler source code, including a parallel thread execution (PTX) backend, to LLVM, a widely used open source compiler infrastructure.