Interview: GPU Technology Conference Enters 5th Year with Over 100 HPC Sessions


The upcoming GPU Technology Conference is entering its fifth year with developer talks on everything from numerical algorithms to big data analytics. “In short, we’ll have a ton of HPC content. There are nearly 100 sessions dedicated to supercomputing and HPC topics. This includes major scientific research enabled by these GPU-accelerated systems – everything from breakthroughs in cancer research and astronomy, to HIV research and new big data analytics innovations.”

Slidecast: New PGI 2014 Release Adds OpenACC 2.0 Features and x64 Performance Gains

Doug Miles

In this slidecast, Doug Miles from Nvidia describes the new features and performance gains in the PGI 2014 release. “The use of accelerators in high performance computing is now mainstream,” said Douglas Miles, director of PGI Software at Nvidia. “With PGI 2014, we are taking another big step toward our goal of providing platform-independent, multi-core and accelerator programming tools that deliver outstanding performance on multiple platforms without the need for extensive, device-specific tuning.”

ACM Webcast: Achieve Massively Parallel Acceleration with GPUs

Mark Ebersole

ACM is continuing its popular webcast series with a talk on “Achieve Massively Parallel Acceleration with GPUs” by Nvidia’s Mark Ebersole at 1 pm ET on Thursday, February 27.

Bill Dally Presents: Scientific Computing on GPUs


Bill Dally from Nvidia presented this talk at the Stanford HPC Conference. “HPC and data analytics share challenges of power, programmability, and scalability to realize their potential. The end of Dennard scaling has made all computing power-limited, so that performance is determined by energy efficiency. With improvements in process technology offering little increase in efficiency, innovations in architecture and circuits are required to maintain the expected performance scaling.”

An HSA Overview


Over at the Stream Computing Blog, Vincent Hindriksen has posted an overview of the Heterogeneous Systems Architecture (HSA). “HSA changes the way memory is handled by eliminating a hierarchy in processing units. In a hUMA architecture, the CPU and the GPU (inside the APU) have full access to the entire system memory.”

Neuroscientist to Keynote at GPU Technology Conference


George Millington writes that one of the world’s leading researchers on how brain deficits accrue with age will be a featured speaker at next month’s GPU Technology Conference in California.

How New Features in CUDA 6 Make GPU Acceleration Easier


Mark Harris from Nvidia presents this talk from SC13. “The performance and efficiency of CUDA, combined with a thriving ecosystem of programming languages, libraries, tools, training, and services, have helped make GPU computing a leading HPC technology. Learn how powerful new features in CUDA 6 make GPU computing easier than ever, helping you accelerate more of your application with much less code.”

Video: 20 Petaflop Simulation of Protein Suspensions in Crowding Conditions


“The simulations were performed on the Titan system at the Oak Ridge National Laboratory, and exhibit excellent scalability up to 18,000 K20X NVIDIA GPUs, reaching 20 Petaflops of aggregate sustained performance with a peak performance of 27.5 Petaflops for the most intensive computing component.”

Programming GPUs Directly from Python Using NumbaPro


“NumbaPro is a powerful compiler that takes high-level Python code directly to the GPU, producing fast code that is the equivalent of programming in a lower-level language. It contains an implementation of CUDA Python as well as higher-level constructs that make it easy to map array-oriented code to the parallel architecture of the GPU.”

Hurry! Early Bird Rates End Soon for GPU Technology Conference

Fans of accelerated computing are reminded that the Early Bird registration deadline for the 2014 GPU Technology Conference ends January 29.