Depth-constrained Feature-based Stitching System for Stereoscopic Panoramic Video Generation
Thesis posted on 2019-12-01, authored by Haoyu Wang
The thesis presents a feature-based stitching system for high-quality stereoscopic panoramic video generation. Although panorama stitching is a well-studied topic, and various algorithms and software are available to create high-quality monocular panoramas, generalizing those methods to both stereo and video modes is not an easy task. A satisfactory output of the stitching system must meet several requirements. For every single frame, the output monocular-view panorama should have minimal spatial stitching errors or distortion. For every pair of output stereoscopic panoramas, no vertical disparity should be perceivable, and the horizontal disparity should be appropriately distributed across the scene. For the output video, discontinuities between consecutive frames, such as shakiness or abrupt changes of depth, are undesirable. The main contribution of our work consists of the definition of a depth-constrained feature in the stitching framework, the introduction of human visual sensitivity into control-point refinement, and the post-stitching correction of artifacts and depth anomalies. First, stitching based on the proposed depth-constrained feature ensures fewer visible artifacts in the generated monocular panorama and better stereo consistency between the left and right views. Furthermore, we utilize human visual sensitivity to refine and qualify the input control points and the tracking results of the Kanade-Lucas-Tomasi tracker across the video sequences. Finally, pixel-based monocular geometric correction and feature-based depth control enable us to minimize all visible stitching errors and adjust the perceived depth of the stereoscopic panoramic video into a comfortable range.
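The requirement that no vertical disparity be perceivable can be illustrated by filtering matched control points between the left and right views. The sketch below is a minimal, hypothetical illustration in NumPy (the function name and the 1-pixel threshold are assumptions for demonstration, not the thesis's actual refinement procedure, which additionally weights points by human visual sensitivity):

```python
import numpy as np

def filter_by_vertical_disparity(left_pts, right_pts, max_vert_px=1.0):
    """Keep only matched point pairs whose vertical disparity is small.

    left_pts, right_pts: (N, 2) arrays of (x, y) image coordinates of
    matched control points in the left- and right-eye views.
    max_vert_px: illustrative perceptual threshold in pixels (assumed).
    """
    left_pts = np.asarray(left_pts, dtype=float)
    right_pts = np.asarray(right_pts, dtype=float)
    # Vertical disparity is the difference of the y-coordinates.
    vert_disparity = np.abs(left_pts[:, 1] - right_pts[:, 1])
    keep = vert_disparity <= max_vert_px
    return left_pts[keep], right_pts[keep]

# Example: three matches; the second has 5 px of vertical disparity
# and is rejected by the 1 px threshold.
L = np.array([[100.0, 50.0], [200.0, 80.0], [300.0, 120.0]])
R = np.array([[ 90.0, 50.4], [190.0, 85.0], [288.0, 119.5]])
L_ok, R_ok = filter_by_vertical_disparity(L, R)
print(len(L_ok))  # 2 pairs survive
```

In a full pipeline, the surviving horizontal disparities (the x-coordinate differences) would then be redistributed across the scene by the depth-control stage.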