In this video from the 2013 GPU Technology Conference, Dustin Franklin from GE Intelligent Platforms presents: GPUDirect Support for RDMA and Green Multi-GPU Architectures. View the GE Presentation Slides on Slideshare.
In this follow-up podcast to the GPU Technology Conference, the Radio Free HPC team mulls over a talk by GE's Dustin Franklin, GPU app specialist. Dustin's topic was GPUDirect RDMA; was this a first look at real-world RDMA with GPU-to-GPU communications?
Follow along as the guys describe flow charts on technical slides that are not yet approved for viewing by the "great unwashed masses" – but make no mistake, they're impressed by what they saw. Dan "knows a guy" who can divulge more, and offers to arrange an inquisition with Henry. Henry promised to "be nice," whatever he means by that. Rich missed this GTC session and several others while "conducting interviews," whatever he means by that. Dan offers another characterization. And this just in: there's a great deal of information available on the Internet.
Jim Ison, VP of Sales at One Stop Systems, walks us through their monster PCIe expansion chassis that can hold up to 16 full-size GPU cards. This product is truly an innovative design, packing this amount of processing power into only 3U of rack space.
One of the coolest things was a demo of their new Scorpii project. Scorpii is a visualization system. At the show, it used two systems with six GPUs to generate a Toy-based molecular dynamics model and another system with three GPUs to project the model on nine displays in real time. It's an affordable platform that allows researchers to generate their simulations and visualize the results quickly, rather than wait hours for the program to execute on a traditional supercomputer. In the video, Tim Thomas, physicist and Deputy Director of UNM Advanced Research Computing (also a CreativeC consultant), walks me through the simulation.
DigiCortex is my hobby project implementing large-scale simulation and visualization of biologically realistic cortical neurons, synaptic receptor kinetics, axonal action potential propagation delays, as well as long-term and short-term synaptic plasticity. The current version of DigiCortex is heavily optimized for Intel CPUs (including the Sandy Bridge AVX instruction set). The first CUDA-enabled version with GPU acceleration (CUDA optimizations done by Ana Balevic) is available as of v0.95.
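To give a flavor of what a spiking-neuron simulation like this computes at each time step, here is a minimal sketch of the well-known Izhikevich (2003) neuron model in Python. This is an illustration of the general technique only – the parameters and integration scheme are textbook defaults, not DigiCortex's actual model or code.

```python
# Minimal Izhikevich-style spiking neuron -- an illustrative sketch,
# not DigiCortex's actual model, parameters, or integration scheme.

def izhikevich_step(v, u, current, dt=0.25,
                    a=0.02, b=0.2, c=-65.0, d=8.0):
    """One forward-Euler step of the Izhikevich neuron equations.

    v: membrane potential (mV), u: recovery variable,
    current: injected input. Returns (v, u, spiked).
    """
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
    u += dt * a * (b * v - u)
    if v >= 30.0:                 # action potential: reset v, bump recovery
        return c, u + d, True
    return v, u, False

# Drive the neuron with a constant input current and count spikes.
v, u, spikes = -65.0, -13.0, 0
for _ in range(2000):
    v, u, spiked = izhikevich_step(v, u, current=10.0)
    spikes += spiked
```

A real simulator like DigiCortex evaluates millions of such state updates per step – exactly the kind of wide, regular arithmetic that AVX vector units and CUDA GPUs accelerate well.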
The simulation footage in this video is really gorgeous, so be sure to watch it in HD mode. Read the Full Story.
In this video from the GPU Technology Conference, David Ingersol from Penguin Computing describes the company’s new Relion 2808GT server, which packs 8 GPUs into a 2U server chassis for High Performance Computing.
One of my favorite talks this week from the GPU Technology Conference was a presentation from Matthew Gueller from Harley-Davidson. Over at the Nvidia Blog, Ken Brown writes that Harley is using GPUs for 3D modeling that cuts months off its design cycle.
Harley-Davidson has been designing and manufacturing motorcycles for over 110 years. While the motorcycle designs remain true to the heritage, the process has evolved to incorporate many new tools into the conceptual design process to reduce the time required to develop new products, improve styling intent, and allow for greater conceptual exploration. By leveraging tools from Bunkspeed, Keyshot, Autodesk, Dassault, and others, we have added flexibility to our process for delivering high-quality designs earlier. This presentation will go through some of the conceptual design workflows and show how Harley-Davidson uses visualization tools to bring it all together. Feedback on GPU vs. CPU performance benchmarking done at Harley-Davidson, and on how these tools are leveraged, will be provided.
In this video, CUDA book author Rob Farber discusses the recent Nvidia keynote at the 2013 GPU Technology Conference. As a technologist, Rob thinks some of the things that weren’t said by Nvidia CEO Jen-Hsun Huang during the talk are very significant in terms of high performance computing and the business of accelerated computing.
Ralph Gilles, senior vice president – Product Design and president and CEO – SRT (Street and Racing Technology) Brand and Motorsports at Chrysler Group LLC, and the mind behind some of the company's most innovative products, will provide a behind-the-scenes look at the auto industry. Gilles will review how GPUs are used to advance every step of the automobile development process – from the initial conceptual designs and engineering phases through product assembly and marketing. He will also discuss how Chrysler Group utilizes GPUs and the latest technologies to build better, safer cars and reduce time to market.
"Introducing the Kayla Platform for computing on the ARM architecture – where supercomputing meets mobile computing. The Kayla platform is powered by an NVIDIA Tegra quad-core ARM processor and a Kepler GPU to deliver the highest performance and highest efficiency for the next generation of CUDA and OpenGL applications. Pre-installed with CUDA 5 and supporting OpenGL 4.3, it provides ARM application development across the widest range of application types. The Kayla platform will be available Spring 2013."
Over at The Register, Timothy Prickett Morgan writes that Nvidia has announced plans to stack up DRAM on future ‘Volta’ GPUs to deliver over 1TB/sec of memory bandwidth. Due sometime around 2016, Volta’s memory technology will bring memory closer to the GPU, increasing bandwidth while reducing latency.
"Volta is going to solve one of the biggest issues with GPUs today, which is access to memory bandwidth," explained Huang. "The memory bandwidth on a GPU is already several times that of a CPU, but we never seem to have enough." So with Volta, Nvidia is going to get the memory closer to the GPU so signals do not have to come out of the GPU, onto a circuit board, and into the GDDR memory. This current approach takes more power (you have to pump up the signal to make it travel over the board), introduces latency, and decreases bandwidth.
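The 1TB/sec figure can be sanity-checked with back-of-the-envelope arithmetic: peak memory bandwidth is roughly the interface width in bytes times the transfer rate. The numbers below are illustrative assumptions showing why a very wide stacked-DRAM interface crosses the 1TB/sec mark – they are not Nvidia's published Volta specifications.

```python
# Back-of-the-envelope peak memory bandwidth.
# All part numbers here are illustrative assumptions, not Volta specs.

def peak_bandwidth_gb_s(bus_width_bits, rate_gt_s):
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfer rate in GT/s)."""
    return bus_width_bits / 8.0 * rate_gt_s

# A 384-bit GDDR5 interface at 6 GT/s, roughly a 2013 high-end card:
gddr5 = peak_bandwidth_gb_s(384, 6.0)      # 288 GB/s

# A hypothetical 4096-bit stacked-DRAM interface at a modest 2 GT/s:
stacked = peak_bandwidth_gb_s(4096, 2.0)   # 1024 GB/s -- past 1 TB/s
```

The point of stacking is that a die sitting on (or next to) the GPU can afford an interface an order of magnitude wider than traces routed across a circuit board, so each pin can run slower and cooler while total bandwidth goes up.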
In related projects, Micron, Intel, and IBM are partnering on an effort to stack up DRAM, with hopes to commercialize something in the next few years. Read the Full Story.