3D mobile from the inside out

John C. Tanner
17 Feb 2011

3D isn't just for cinemas and flatscreen TVs - it's also coming to smartphones and tablets, and you won't need special glasses. Paul Costigan, chief operating officer and APAC president of 3D chipset company Movidius, talks to Wireless Asia editor John C. Tanner about what it takes to enable 3D video in cell phones, and why this time around, 3D will outlast the novelty factor that has driven - and then sunk - its previous incarnations.

Wireless Asia: Let's start with what Movidius does in the 3D space.

Paul Costigan: To explain that, it's easier to step back and explain how 3D works. One of the objections people have to 3D is fatigue and discomfort, and a lot of that is down to technology. What you're doing in 3D is sending a different image to each eye, and your eyes are used to receiving the same image and using depth cues to converge on it. If you present two images, and there's any difference in focus or color or angular rotation, your brain gets seriously confused. So there are techniques like rectification and convergence and other things we have to do.

What do rectification and convergence do?

Rectification basically fixes errors during the capture process. We can't just capture an image like we do with 2D; there's a hell of a lot of stuff that has to be done first.

Convergence is the process of deciding what you're going to concentrate on when you're looking at an image. One solution for this is called "fixed convergence", where the images converge at about 1 to 4 meters. Our solution is different: we decide what the main object in the frame is and focus on that. That means we can offer a 3D viewing experience from 75 cm to infinity. So if you point our camera at trees in the distance, for example, we can still give a depth effect for those distant objects, whereas simpler 3D systems can't do that.
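Movidius's actual convergence algorithm isn't public, but the idea described above - find the dominant object in the frame and shift the stereo pair so that object sits at zero disparity, on the screen plane - can be sketched in a few lines. This is my own minimal illustration, not the company's method: the function name is invented, the "main object" is crudely approximated as the most common disparity value, and the shift is a whole-pixel roll rather than a proper sub-pixel warp.

```python
import numpy as np

def converge_on_main_object(disparity: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Shift the right view so the dominant object lands at zero disparity.

    disparity: per-pixel horizontal offset (in pixels) between the two views.
    right:     the right-eye image, same height/width as the disparity map.
    """
    # Crude "main object" detector: the mode of the rounded disparity
    # histogram, i.e. the depth that covers the most pixels.
    values, counts = np.unique(np.round(disparity).astype(int), return_counts=True)
    main_disparity = int(values[np.argmax(counts)])
    # Slide the right image horizontally by that amount so the dominant
    # object converges at the screen plane (np.roll stands in for a warp).
    return np.roll(right, shift=main_disparity, axis=1)
```

A fixed-convergence system would instead apply one constant shift tuned for roughly 1-4 m, which is why it cannot place very near or very distant objects comfortably.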

Also, when you mount a camera on a phone, you can have up to seven mounting errors on a lot of different axes: up and down, in and out, and rotational errors. So we have to compensate for those, which eases the job of the manufacturer. We also have to compensate for things like lens angle variance and color balancing.
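To make the rectification step concrete, here is a sketch of correcting just one of the mounting errors Costigan mentions - a rotational ("roll") error - by rotating the image back about its centre. This is my own simplified illustration, not Movidius's pipeline: a real rectifier also handles translation, tilt, lens-angle variance and color balancing, and would use calibrated camera matrices and sub-pixel interpolation rather than nearest-neighbour sampling.

```python
import numpy as np

def correct_roll_error(img: np.ndarray, roll_deg: float) -> np.ndarray:
    """Undo a rotational mounting error by rotating the image back about
    its centre (nearest-neighbour sampling, for brevity)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(roll_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map each output pixel to where it came from in the source.
    src_x = cos_t * (xs - cx) + sin_t * (ys - cy) + cx
    src_y = -sin_t * (xs - cx) + cos_t * (ys - cy) + cy
    sx = np.clip(np.round(src_x).astype(int), 0, w - 1)
    sy = np.clip(np.round(src_y).astype(int), 0, h - 1)
    return img[sy, sx]
```

Applying a correction like this to each camera (plus the translational compensations) is what lets the manufacturer mount the two modules without micron-perfect alignment.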
