Clemson University announced Monday that it has been named a CUDA Teaching Center.
This week Nvidia salutes women who use CUDA for incredible science and engineering. The company has compiled 30 profiles so far, and the advice these women share from their experiences is quite inspiring. “It’s a good way to remind people that women write code, participate in open-source projects, and invent things,” said Lorena Barba from George Washington University. “It’s important to make the technology world more attractive to female students and show them examples of women who are innovators.”
“Discover killer-app fundamentals including how to tame dynamic parallelism with a performance-robust parallel stack that allows both host- and device-side fast memory allocation and transparent data transfer of arbitrarily complex data structures and general C++ classes. A low-wait approach (related to wait-free methods) is used to create a performance-robust parallel counter. You definitely want to use this counter for histograms! New results extending machine learning and big data analysis to 13 PF/s average sustained performance using 16,384 GPUs in the ORNL Titan supercomputer will be presented.”
Mark Harris from Nvidia presents this talk from SC13. “The performance and efficiency of CUDA, combined with a thriving ecosystem of programming languages, libraries, tools, training, and services, have helped make GPU computing a leading HPC technology. Learn how powerful new features in CUDA 6 make GPU computing easier than ever, helping you accelerate more of your application with much less code.”
Today Nvidia announced CUDA 6, the latest version of the company’s parallel computing platform designed to make parallel programming easier than ever.
Today Allinea Software announced support for version 5.5 of the NVIDIA CUDA parallel programming toolkit. The new release includes debugging support for C++11, the GNU 4.8 compilers, and ARMv7 architectures, which will soon power hybrid platforms with lower energy consumption for HPC.