This Week in Vis

This week we’ve got news on processing photographs into 3D models, massive shared filesystems, and building transformers.

ORNL implements a shared filesystem with Spider

Anyone working in data analysis and visualization will tell you that the #1 problem facing them is file storage.  As the datasets get bigger and bigger, moving them from the HPC systems to the visualization resources becomes a bigger pain.  Oak Ridge National Laboratory has been facing this problem for a while now, and has recently stood up a shared parallel file system named ‘Spider’ to fix it.

Once a project ran an application on Jaguar, it then had to move the data to the Lens visualization platform for analysis. Any problem encountered along the way would necessitate that the cumbersome process be repeated. With Spider connected to both Jaguar and Lens, however, this headache is avoided. “You can think of it as eliminating islands of data. Instead of having to multiply file systems all within the NCCS, one for each of our simulation platforms, we have a single file system that is available anywhere. If you are using extremely large data sets on the order of 200 terabytes, it could save you hours and hours.”

While this is nice, it still doesn’t solve the problem of then holding all that data in memory.  But at least you don’t have to spend a month waiting on an FTP transfer to finish anymore.
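To make the difference concrete, here’s a minimal sketch of what a vis job looks like when the simulation and visualization machines mount the same file system. The mount point and file names are hypothetical (the article doesn’t give actual NCCS paths); the point is that the analysis code opens the simulation’s output in place, with no staging copy or FTP step.

    from pathlib import Path

    # Hypothetical centerwide mount; real NCCS paths will differ.
    SPIDER = Path("/spider/proj123/run42")

    def visualize(dump: Path) -> None:
        """Open one timestep dump in place -- no copy to a vis cluster."""
        size_gb = dump.stat().st_size / 1e9
        print(f"rendering {dump.name} ({size_gb:.1f} GB) directly from Spider")

    # The simulation on Jaguar writes these files; the vis job on Lens
    # reads the very same paths, because both machines mount Spider.
    for dump in sorted(SPIDER.glob("timestep_*.h5")):
        visualize(dump)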

Think your code is slow? Try building a Transformer.

Visual effects supervisor Scott Farrar sat down with the folks at Bollywood Hungama to talk about how they worked within the physical constraints on Transformers 2, such as measuring the pyramids and building mockups of the robots for the actors’ eyelines.  Towards the end, the interviewer asked one interesting question:

Can you give an idea of the time and money it takes to bring just one of the robots to life for 10 seconds?

That is a great question because no matter how many times they appear in the movie it takes a certain amount of work. It takes roughly about six months to put a robot together. This may be surprising, but you have to build all the pieces. It is like going into your workshop and making those parts, except it is a computer graphics workshop. The men and women who make these characters, make the shapes and those shapes have compound curves, which is complicated. Then some shapes have 4 to 16 layers of information in the computer, so that it looks like plastic or glass or shiny chrome or brushed steel, plus all the pigments of colour. That is a lot of stuff, for every piece. The building of it is one thing, that takes 12 to 16 weeks, and then you go into paint and textures. Then there are the people who connect all the pieces and that can take even longer. You have to work it all out so that basically the skeleton hangs together in the computer.

Nobody thought building a Transformer was “quick”, but six months? Wow.
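Farrar’s “4 to 16 layers of information” maps loosely onto what a shading pipeline would call a layered material stack. Purely as an illustration (none of these names come from ILM’s actual pipeline), here’s a sketch of that data structure:

    from dataclasses import dataclass, field

    @dataclass
    class MaterialLayer:
        name: str      # e.g. "brushed steel base", "clearcoat gloss"
        kind: str      # "diffuse", "specular", "clearcoat", ...
        weight: float  # how strongly this layer contributes to the look

    @dataclass
    class RobotPart:
        shape: str                                 # the modeled geometry
        layers: list[MaterialLayer] = field(default_factory=list)

    # One hood panel carrying several "layers of information":
    hood = RobotPart("hood_panel")
    hood.layers += [
        MaterialLayer("brushed steel base", "specular", 1.0),
        MaterialLayer("yellow pigment coat", "diffuse", 0.8),
        MaterialLayer("clearcoat gloss", "clearcoat", 0.6),
        MaterialLayer("battle scratches", "diffuse", 0.2),
    ]
    print(f"{hood.shape}: {len(hood.layers)} shading layers")

Multiply that stack across every part of every robot and the six-month figure starts to look less surprising.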

Turn Photographs into 3D Models

David McKinnon, a researcher from Queensland University of Technology, has developed a software tool called 3DSee that can take a collection of ordinary 2D photographs and process them into a 3D model with surprising accuracy.

Dr McKinnon said the software automatically locates and tracks common points between the images, allowing it to determine where the cameras were when the photos were taken. This information is then used to create a 3D model from the images, using graphics cards to massively accelerate the computations.

A nice application of GPGPU computing.  However, not just any images will do: according to Dr McKinnon, the tool requires 5-15 images, each overlapping by at least 80%.  Essentially, it sounds like you need video slowly panning around the object.
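3DSee itself isn’t open source, but the pipeline Dr McKinnon describes (find common points, recover the camera poses, then reconstruct geometry) is classic structure-from-motion. A minimal two-view sketch using OpenCV follows; the file names and camera intrinsics are assumptions for illustration, not anything from 3DSee.

    import cv2
    import numpy as np

    # Assumed intrinsics; a real reconstruction needs calibrated values.
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Locate and match common points between the images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 2. Determine where the cameras were (relative pose from matches).
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # 3. Triangulate the matched points into a sparse 3D point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    cloud = (pts4d[:3] / pts4d[3]).T
    print(f"triangulated {len(cloud)} points")

A real pipeline would chain this across all 5-15 overlapping views, densify the cloud, and refine everything with bundle adjustment, which is where the GPU acceleration earns its keep.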

If the accuracy is high enough, I can envision this replacing (or supplementing) a lot of the 3D scanning technology used by the graphics and mechanical engineering communities.

Lockheed Martin wins DARPA Augmented Reality Contract

DARPA, the Pentagon’s mad-scientist division, has awarded Lockheed Martin $1m to develop “daylight-readable, see-through, low-profile, ergonomic” color video glasses.  That’s a tall order, and Lockheed Martin will be working with Microvision to build it.

The Lockheed-Microvision deal is part of a US military project named Urban Leader Tactical Response, Awareness & Visualization (ULTRA-Vis). It’s intended to equip American combat troops not only with see-through video specs but also with a cunning “gesture recognition” interface allowing squad leaders to effectively scribble on the real world – for instance marking a door, and having the same mark show up in their teammates’ specs as well.
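The “scribble on the real world” idea boils down to sharing world-anchored annotations across displays. Here’s a toy sketch of that protocol; everything in it (the message format, the coordinates, the use of UDP broadcast) is my assumption for illustration, not anything DARPA or Lockheed has published.

    import json
    import socket
    from dataclasses import dataclass, asdict

    # A mark anchored in world coordinates, so every teammate's display
    # can render it at the same physical spot.
    @dataclass
    class WorldMark:
        label: str     # e.g. "breach this door"
        lat: float     # assumed WGS84 position
        lon: float
        alt: float

    def broadcast_mark(mark: WorldMark, port: int = 9999) -> None:
        """Send the mark to the squad over UDP broadcast (illustrative only)."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(asdict(mark)).encode(), ("255.255.255.255", port))

    broadcast_mark(WorldMark("door, east wall", 38.8977, -77.0365, 18.0))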

The US military has been investing in virtual and augmented reality for years, but hopefully whatever they come up with will eventually make its way down to the consumer level.