In this thesis we present a working system for hybrid 3D reconstruction for geometry-based Free Viewpoint Video. The primary building block of the system is a multi-projector multi-camera array that performs real-time 3D reconstruction based on the principle of phase-shifted structured light. In addition, disparity information obtained from two separate color camera pairs is used for image-guided phase unwrapping; this hybrid approach improves overall system performance, namely reconstruction accuracy and frame rate. The array comprises six cameras and two projectors in total, for which an automated calibration procedure has been developed that allows fast repositioning and recalibration of both cameras and projectors. We present an efficient way to remove or conceal remaining errors in the 3D reconstruction once the geometry-based imagery is rendered for Free Viewpoint Video synthesis. To further improve the final video quality, the dynamic 3D data generated with the real-time system is combined with a highly accurate static background model of the scene.
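As a brief illustration of the underlying principle, consider a generic N-step phase-shifting formulation (the symbols $I_n$, $A$, $B$ and $N$ are introduced here only for illustration and do not denote the exact pattern set used in the system): the projector displays sinusoidal patterns $I_n(x,y) = A(x,y) + B(x,y)\cos\!\big(\phi(x,y) + \tfrac{2\pi n}{N}\big)$ for $n = 0,\dots,N-1$, and the wrapped phase is recovered per pixel as

\[
\phi(x,y) \;=\; \arctan\!\left( \frac{-\sum_{n=0}^{N-1} I_n(x,y)\,\sin\!\big(\tfrac{2\pi n}{N}\big)}{\sum_{n=0}^{N-1} I_n(x,y)\,\cos\!\big(\tfrac{2\pi n}{N}\big)} \right).
\]

Because $\phi$ is only determined modulo $2\pi$, a subsequent unwrapping step is required; in our system this step is guided by the stereo disparity obtained from the color camera pairs.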
We compare a multitude of 3D scanning, 3D rendering and 3D display technologies. Our system is able to accommodate all of the latter, including stereoscopic, auto-stereoscopic, volumetric and even future holographic displays. To realize immersive Free Viewpoint Video in the absence of large-scale full-parallax holographic displays, we have developed a compact head-tracking framework. It determines the head position of a single viewer relative to the display and renders the 3D data onto the display so that the viewer appears to see the scene not on a flat screen but through an actual real-world window.
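To illustrate the window metaphor, the sketch below shows how a tracked head position can drive an asymmetric (off-axis) view frustum whose near-plane rectangle corresponds to the physical screen. This is a minimal, self-contained example under assumed conventions, not the renderer used in this work; the names offAxisFrustum and frustumMatrix as well as all parameters are illustrative.

```cpp
#include <array>
#include <cstdio>

// Display modeled as a width x height rectangle centered at the origin of a
// display-aligned coordinate system (x right, y up, z towards the viewer).
// The head position (ex, ey, ez) is given in the same system, e.g. as
// delivered by a head-tracking framework.
struct Frustum { double l, r, b, t, n, f; };

Frustum offAxisFrustum(double ex, double ey, double ez,
                       double width, double height,
                       double nearPlane, double farPlane)
{
    // Project the screen rectangle, as seen from the eye, onto the near plane.
    const double s = nearPlane / ez;               // ez > 0: viewer in front of the screen
    return { (-0.5 * width  - ex) * s,             // left
             ( 0.5 * width  - ex) * s,             // right
             (-0.5 * height - ey) * s,             // bottom
             ( 0.5 * height - ey) * s,             // top
             nearPlane, farPlane };
}

// Column-major 4x4 asymmetric perspective matrix (glFrustum-style convention).
std::array<double, 16> frustumMatrix(const Frustum& v)
{
    std::array<double, 16> m{};
    m[0]  = 2.0 * v.n / (v.r - v.l);
    m[5]  = 2.0 * v.n / (v.t - v.b);
    m[8]  = (v.r + v.l) / (v.r - v.l);
    m[9]  = (v.t + v.b) / (v.t - v.b);
    m[10] = -(v.f + v.n) / (v.f - v.n);
    m[11] = -1.0;
    m[14] = -2.0 * v.f * v.n / (v.f - v.n);
    return m;
}

int main()
{
    // Example: 1.0 m x 0.6 m screen, head 0.2 m right of center, 0.7 m away.
    Frustum v = offAxisFrustum(0.2, 0.0, 0.7, 1.0, 0.6, 0.1, 100.0);
    std::printf("l=%.3f r=%.3f b=%.3f t=%.3f\n", v.l, v.r, v.b, v.t);
    // The view transform would additionally translate the scene by
    // (-ex, -ey, -ez) so that geometry stays fixed relative to the screen.
}
```

With the frustum recomputed every frame from the tracked head position, the rendered geometry remains registered to the physical screen borders, which produces the impression of looking through a window rather than at a flat image.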