This Week in Viz – 8/27/09


Randall Hand from VizWorld.com, the web’s best site dedicated to computer graphics and scientific visualization, recaps the week’s best stories related to supercomputing in the visualization and graphics industries. This week he talks about new hires at NVidia, upcoming conferences, and new real-time holographic rendering.


SGI Graphics VP Hired by NVidia

Just heard through a reliable grapevine a bit more news on the SGI culling of the Graphics Division. The VP of the Graphics Division, Robert Pette (who we interviewed previously), has been hired by NVidia. No news yet on his new title, position, or responsibilities, just that he’s now proudly sporting an NVidia shirt. Here’s his professional bio, if you’re curious; ironically, it’s taken from the SGI Executive Team page.

Bob Pette is leading SGI’s Visualization Group, providing the vision, design, strategy and direction for all SGI’s visualization products and solutions, including the newly released VUE suite of software. The VUE suite of software provides innovative solutions that help high-performance organizations consolidate and maximize their compute and visualization resources to manage the rapidly growing digital universe, anytime, anyplace and on any device. In his 21-year career at SGI, Bob has held positions in Systems Engineering, Application and Solutions Development, Customer Benchmarking, Customer Services, Services/Sales Operations and Corporate Marketing. As vice president of SGI Global Services, Bob expanded SGI’s visualization practice via the design, development and implementation of Reality Center environments, simulators, CAVE installations, and immersive auditoriums for industries ranging from aerospace design, defense and intelligence sectors to energy exploration. Bob received his B.S. in Aerospace Engineering from Georgia Tech and his B.S. in Mathematics from the University of Tampa.

Here’s hoping that NVidia will start to take in some more of the VUE product team. One other interesting side effect of this is that it could remedy a current limitation of the NVidia drivers on SGI’s upcoming UltraViolet systems: a limit of 16 GPUs. With more of the old SGI UV team in-house, maybe they’ll get that fixed.

Smashing the Trillion Zone Barrier

Details of the massive VisIt run announced a while back are starting to come out, and while they still aren’t publishing any concrete results, you can find some interesting details about the systems and testing procedures used:

The VACET team ran the experiments in April and May on six world-class supercomputers (latest TOP500 rankings noted):

Franklin — a 38,128-core Cray XT4 located at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab (No. 11)
JaguarPF — a 149,504-core Cray XT5 at the Oak Ridge Leadership Computing Facility at ORNL (No. 2)
Ranger — a 62,976-core x86_64 Linux system at the Texas Advanced Computing Center at the University of Texas at Austin (No. 8)
Purple — a 12,288-core IBM Power5 at LLNL (No. 50)
Juno — an 18,432-core x86_64 Linux system at LLNL (No. 19)
Dawn — a 147,456-core BlueGene/P system at LLNL (No. 9)

One thing I quickly noticed from this list: nothing from SGI. (I would say nothing from Sun as well, but I think Ranger is a Sun system.) But, aside from “because we can”, why did they do this? First is the following claim from Wes Bethel:

“The results show that visualization research and development efforts have produced technology that is today capable of ingesting and processing tomorrow’s datasets,” said Berkeley Lab’s E. Wes Bethel, who is co-leader of VACET. “These results are the largest-ever problem sizes and the largest degree of concurrency ever attempted within the DOE visualization research community.”

But more to the point is this:

Another purpose of these runs was to prepare for establishing VisIt’s credentials as a “Joule code,” or a code that has demonstrated scalability at a large number of cores. DOE’s Office of Advanced Scientific Computing Research (ASCR) is establishing a set of such codes to serve as a metric for tracking code performance and scalability as supercomputers are built with tens and hundreds of thousands of processor cores. VisIt is the first and only visual data analysis code that is part of the ASCR Joule metric.
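
For a rough sense of scale, here’s my own back-of-the-envelope arithmetic, not numbers from the announcement: a trillion zones with a single 4-byte scalar per zone is roughly 4 TB of raw field data, which is why runs like this only make sense at the concurrencies listed above.

    # Back-of-the-envelope scale of a "trillion zone" run. The 4-byte
    # scalar and the 16,000-core concurrency are illustrative assumptions,
    # not figures from the announcement.
    zones = 10**12
    total_bytes = zones * 4
    cores = 16_000
    print(f"{total_bytes / 2**40:.1f} TiB total, "
          f"{total_bytes / cores / 2**20:.0f} MiB per core")  # ~3.6 TiB, ~238 MiB/core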

Holographic GPU Renders in Near Real-Time

Holographic displays are the ultimate in passive glasses-free 3D, displaying a scene that allows for multiple simultaneous viewing angles based on the actual viewing location, without any fancy head tracking. However, the details behind making one are incredibly complex. In a new paper in ‘Optics Express’, Japanese researchers reveal a new video card capable of ‘near real-time’ holographic rendering.

Well, researchers in Japan have created a graphics card, called the HORN-6, that can do this for you. It consists of four Xilinx field-programmable gate arrays (FPGAs), each of which has about 7 million gates and a bit of memory (less than 1MB). Each FPGA is connected to 256MB of DDR RAM, while a fifth, smaller FPGA is used to manage the PCI bus.

These FPGAs divide the area of a 1,920 x 1,080 LCD among themselves and calculate the intensity of each pixel using a ray-tracing algorithm that also tracks the phase of the light, since the phase is what allows the interference pattern to be calculated. In a nice bit of engineering, each block of pixels that an FPGA can hold locally (the local storage limit) is completed in just under the time it takes to fetch the next block from memory. This allows the researchers to keep the FPGA load pretty much constant by prefetching data.
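
To make the phase-tracking idea concrete, here’s a minimal sketch of the point-source method for computer-generated holography, the general family of calculation the HORN-6 implements in hardware. The wavelength, pixel pitch, and function names are my own illustrative assumptions, not details from the paper:

    import numpy as np

    WAVELENGTH = 633e-9         # red laser, in metres (assumed)
    PITCH = 10e-6               # hologram pixel pitch, in metres (assumed)
    WIDTH, HEIGHT = 1920, 1080  # LCD resolution from the article

    def hologram_fringe(points):
        """points: iterable of (x, y, z, amplitude) object points in metres."""
        ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
        px = (xs - WIDTH / 2) * PITCH   # physical pixel coordinates
        py = (ys - HEIGHT / 2) * PITCH
        field = np.zeros((HEIGHT, WIDTH), dtype=np.complex128)
        for x, y, z, a in points:
            # Distance from each hologram pixel to the object point; the
            # phase term 2*pi*r/lambda encodes the interference pattern.
            r = np.sqrt((px - x) ** 2 + (py - y) ** 2 + z ** 2)
            field += a * np.exp(2j * np.pi * r / WAVELENGTH)
        return np.abs(field) ** 2       # intensity pattern sent to the LCD

    # One object point 10cm behind the display
    fringe = hologram_fringe([(0.0, 0.0, 0.1, 1.0)])

That inner loop runs for every object point in the scene across all two million pixels, which is why a general-purpose processor takes so long per frame, and why the FPGA pipeline that overlaps compute with prefetch matters.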

It’s impressive, but 0.08fps isn’t really what I’d call ‘real-time’. Still, as an early FPGA prototype, it could run significantly faster if dedicated hardware were used.

NVidia CEO Predicts GPU Performance Boost of 570x

Pushing the power of GPGPU, Jen-Hsun Huang has predicted that GPU computing will see an astounding 570x boost in performance over the next 6 years, while CPUs will see only 3x.
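
A quick bit of compound-growth arithmetic (mine, not Huang’s) puts those numbers in perspective:

    # Implied year-over-year growth behind the 570x-vs-3x prediction
    gpu_per_year = 570 ** (1 / 6)   # ~2.88x per year
    cpu_per_year = 3 ** (1 / 6)     # ~1.20x per year
    print(f"GPU: {gpu_per_year:.2f}x/yr, CPU: {cpu_per_year:.2f}x/yr")

In other words, the claim amounts to GPUs nearly tripling in throughput every year while CPUs gain about 20% annually.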

Huang said at the Hot Chips Symposium at Stanford University that such advances could enable the development of realtime universal language translation devices and advanced forms of augmented reality. A number of applications, including energy exploration, interactive ray tracing, and CGI simulations, would also benefit from the powerful GPU capabilities.

Of course, these kinds of comparisons are kinda like comparing apples and oranges, rather than apples and apples.

VMD 1.8.7, Now with CUDA Acceleration

The Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign has just released VMD 1.8.7, a new version of their amazing molecular visualization package. The big feature in this new version is support for NVidia’s CUDA, with impressive performance boosts.

One of the key advancements included in VMD 1.8.7 is support for GPU accelerated visualization and analysis, based on NVIDIA CUDA. As reported in several publications, the massively parallel architecture of GPUs makes them ideal devices to accelerate many of the computationally demanding calculations in VMD. The range of acceleration provided by GPUs depends on the capabilities of the specific GPU devices installed, and the details of the calculation. Typical acceleration factors for the algorithms in VMD are: electrostatics 22x to 44x, implicit ligand sampling 20x to 30x, molecular orbital calculation 100x to 120x. Details on making best use of the GPU acceleration capabilities in VMD are provided here.
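
For a sense of why these kernels map so well to GPUs, here’s a minimal NumPy sketch of direct Coulomb summation, the style of electrostatics calculation behind those 22x to 44x numbers. The grid size, units, and names are my own illustrative assumptions, not VMD’s actual API:

    import numpy as np

    COULOMB = 332.0636  # kcal*A/(mol*e^2), a common MD unit convention (assumed)

    def coulomb_potential_grid(atoms, charges, grid_shape, spacing):
        """Electrostatic potential at every grid point from every atom.
        O(grid points x atoms) independent terms, which is why this
        calculation parallelizes so well on a GPU."""
        zs, ys, xs = np.indices(grid_shape)
        grid = np.stack([xs, ys, zs], axis=-1) * spacing  # coords in Angstroms
        v = np.zeros(grid_shape)
        for pos, q in zip(atoms, charges):
            r = np.linalg.norm(grid - pos, axis=-1)
            v += COULOMB * q / np.maximum(r, 1e-6)  # guard divide-by-zero at atom sites
        return v

    # Two opposite test charges on an 8x8x8 grid with 0.5 Angstrom spacing
    pot = coulomb_potential_grid(np.array([[1.0, 1.0, 1.0], [2.5, 2.0, 1.5]]),
                                 np.array([+1.0, -1.0]), (8, 8, 8), 0.5)

Every grid point’s contribution is independent of every other’s, so the work maps almost perfectly onto thousands of GPU threads.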