Clemson Becomes the Latest CUDA Teaching Center

Josh Levine shows how GPUs work in his McAdams Hall office at Clemson University.

Clemson University announced Monday that it has been named a CUDA Teaching Center.

RCE Podcast on the Numba Just-in-time Compiler for Accelerating Python

In this RCE Podcast, Brock Palen and Jeff Squyres discuss the Numba just-in-time compiler with Stanley Seibert from Continuum Analytics.
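For readers unfamiliar with Numba, the sketch below shows the kind of hot numeric loop its just-in-time compiler targets. This is an illustrative example, not code from the podcast; the `try`/`except` fallback lets it run even where Numba is not installed (just without compilation).

```python
# Hedged sketch: a plain-Python loop that Numba's @jit can compile to
# machine code. Falls back to a no-op decorator if Numba is unavailable.
try:
    from numba import jit
except ImportError:
    def jit(*args, **kwargs):  # fallback: return the function unchanged
        def wrap(fn):
            return fn
        return wrap

@jit(nopython=True)
def dot(a, b):
    """Dot product written as an explicit loop -- the pattern Numba speeds up."""
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```

The first call triggers compilation; subsequent calls run the compiled loop at near-C speed, which is the core idea Seibert discusses.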

OLCF Seeking Teams for OpenACC Hackathon

OLCF is sponsoring an OpenACC Hackathon the week of Oct. 27.

CUDA Fortran Managed Memory with PGI 14.7

Over at the PGIinsider, Brent LeBack writes that the new PGI compiler release 14.7 enables Unified Memory support in CUDA Fortran.

Nvidia Unveils Quadro GPUs for Visual Computing

Today Nvidia announced its next generation series of Quadro GPUs for visual computing. With up to twice the application performance and data-handling capability of the previous generation, the new line comprises the K5200, K4200, K2200, K620 and K420 model GPUs.

Nvidia Reveals Details on 64-Bit Project Denver Chip

Nvidia revealed new architectural details of the 64-bit version of Project Denver at the Hot Chips conference this week. “While Denver is described as a Mobile chip, Stam claims that its performance will rival some mainstream PC-class CPUs at significantly reduced power consumption. That sounds to me like an interesting building block for HPC clusters.”

Why Big Data is Really Big Parallelism

Robin Bloor

“Moore’s Law got deflected in 2004, when it became no longer practical to ramp up the clock speed of CPUs to improve performance. So the chip industry improved CPU performance by adding more processors to a chip in concert with miniaturization. This was extra power, but you could not leverage it easily without building parallel software. Virtual machines could use multicore chips for server consolidation of light workloads, but large workloads needed parallel architectures to exploit the power. So, the software industry and the hardware industry moved towards exploiting parallelism in ways they had not previously done. This is the motive force behind the Big Data.”
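Bloor's point is that the extra transistors now arrive as extra cores, and software only benefits if the workload is decomposed across them. A minimal sketch of that decomposition (illustrative names, not from the article): the same sum, split into chunks and computed across worker processes.

```python
# Minimal sketch of exploiting multicore parallelism: split one large
# workload into chunks and farm them out to worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- one chunk of the full workload."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split [0, n) into one chunk per worker and combine the partial sums."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    # Serial and parallel versions agree; the parallel one can use all cores.
    assert parallel_sum(n) == n * (n - 1) // 2
```

This divide-and-combine pattern is exactly what Big Data frameworks industrialize at cluster scale.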

This Week in HPC: AMD Surprises with High-Performance GPU and Fujitsu Unveils Teraflop SPARC Chip

In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research look at the new FirePro S9150 GPU from AMD and Fujitsu’s new Teraflop SPARC64 XIfx chip.

Video: Designing for the Future: GPU Solutions

Tau Leng, VP / GM of HPC at Supermicro

In this video from GTC Japan 2014, Tau Leng from Supermicro presents: Designing for the Future: GPU Solutions.

Is AMD Back in HPC? New FirePro GPU Does 2.53 Tflops

Today AMD announced its new FirePro S9150 GPU with 2.53 teraflops of double-precision performance and a maximum power consumption of 235 watts. “AMD FirePro S9150 ushers in a new era of supercomputing. Its memory configuration, compute capabilities and performance per watt are unmatched in its class, and can help take supercomputers to the next level of performance and energy efficiency.”