Contribution Details
Type | Master's Thesis |
Scope | Discipline-based scholarship |
Title | Unsupervised Monocular Depth Reconstruction of Non-Rigid Scenes |
Organization Unit | |
Authors | |
Supervisors | |
Language | |
Institution | University of Zurich |
Faculty | Faculty of Business, Economics and Informatics |
Date | 2022 |
Abstract Text | The reconstruction of depth for complex, non-rigid, and dynamic scenes from monocular videos is a particularly challenging problem. While learning-based approaches have shown promising results for rigid scenes in both supervised and unsupervised settings, little work has been published on dynamic and non-rigid scenes. In addition, most existing unsupervised methods for static or dynamic scenes require calibrated cameras, which are not available for real-world use cases such as YouTube videos. Our work presents an unsupervised monocular framework for dense depth estimation of dynamic scenes that jointly reconstructs rigid and non-rigid components without explicitly modeling camera motion in an uncalibrated camera setting. Our approach follows Takmaz et al. [48], in which we adopt the as-rigid-as-possible prior and minimize a 3D pairwise distance preservation loss across frames. Unlike Takmaz et al. [48], our modified network accommodates multi-video training and learns the camera intrinsics using the Mendonça and Cipolla [31] autocalibration process. The proposed method has shown promising results and demonstrated its ability to reconstruct depth from challenging uncalibrated videos (YouTube videos) of complex and dynamic scenes. Additionally, the proposed method provides a motion segmentation mask as a secondary output. Lastly, we adopt a teacher-student training scheme to enable inference on unseen videos. |
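The abstract's as-rigid-as-possible prior penalizes changes in pairwise 3D distances between corresponding points across frames. A minimal sketch of such a loss is given below; the function name, the use of NumPy, and the L1 penalty over pairwise distances are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def pairwise_distance_preservation_loss(points_t, points_t1):
    """Sketch of an as-rigid-as-possible prior (assumed form):
    penalize changes in pairwise 3D distances between corresponding
    points in two frames. Inputs are (N, 3) arrays of 3D points."""
    # All-pairs Euclidean distance matrices via broadcasting: (N, N)
    d_t = np.linalg.norm(points_t[:, None, :] - points_t[None, :, :], axis=-1)
    d_t1 = np.linalg.norm(points_t1[:, None, :] - points_t1[None, :, :], axis=-1)
    # Mean absolute change in pairwise distances; zero for rigid motion
    return np.mean(np.abs(d_t - d_t1))
```

A rigid transformation (e.g. a pure translation) leaves all pairwise distances unchanged, so the loss vanishes; non-rigid deformation such as scaling yields a positive penalty.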