Right now it feels like 3D playback simply plays the left/right images on a flat plane inside an already 3D space. This seems to cause eye convergence issues, especially when subjects are close to the camera. How about dynamically using the left/right images to build a depth map, and having multiple layered screens in 3D space, each displaying ONLY the pixels at that distance? I think this would alleviate the convergence issues and make watching 3D video less tiring and more natural.
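
To make the idea concrete, here is a minimal sketch of the depth-layering step, assuming an OpenCV-based pipeline: estimate disparity from the stereo pair per frame, then bucket pixels into a handful of depth slices that a renderer could place on quads at increasing distances. The function name `build_depth_layers` and all parameter values are just illustrative, not anything from the existing player, and a real-time implementation would likely need a GPU stereo matcher rather than CPU SGBM.

```python
import cv2
import numpy as np

NUM_LAYERS = 8  # number of depth planes (assumption; tune per content)

def build_depth_layers(left_bgr, right_bgr, num_layers=NUM_LAYERS):
    """Estimate disparity from a stereo pair and slice the left image
    into per-depth RGBA layers that a renderer could place at
    increasing distances from the viewer."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Semi-global block matching; parameters are rough defaults.
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,          # must be divisible by 16
        blockSize=5,
        P1=8 * 3 * 5 ** 2,
        P2=32 * 3 * 5 ** 2,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # SGBM returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity = np.clip(disparity, 0, None)

    # Bucket disparities into num_layers slices: layer 0 = farthest
    # (smallest disparity), last layer = nearest (largest disparity).
    edges = np.linspace(0, disparity.max() + 1e-6, num_layers + 1)
    layers = []
    for i in range(num_layers):
        mask = (disparity >= edges[i]) & (disparity < edges[i + 1])
        rgba = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2BGRA)
        rgba[..., 3] = np.where(mask, 255, 0).astype(np.uint8)
        layers.append(rgba)
    return layers, disparity

# Example usage: the renderer would draw layers[i] on a quad whose
# distance corresponds to that slice's disparity range (depth is
# roughly proportional to baseline * focal_length / disparity).
# left = cv2.imread("left_frame.png")
# right = cv2.imread("right_frame.png")
# layers, disp = build_depth_layers(left, right)
```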