Paper Title

Unsupervised Monocular Depth Reconstruction of Non-Rigid Scenes

Paper Authors

Ayça Takmaz, Danda Pani Paudel, Thomas Probst, Ajad Chhatkuli, Martin R. Oswald, Luc Van Gool

Paper Abstract

Monocular depth reconstruction of complex and dynamic scenes is a highly challenging problem. While for rigid scenes learning-based methods have been offering promising results even in unsupervised cases, there exists little to no literature addressing the same for dynamic and deformable scenes. In this work, we present an unsupervised monocular framework for dense depth estimation of dynamic scenes, which jointly reconstructs rigid and non-rigid parts without explicitly modelling the camera motion. Using dense correspondences, we derive a training objective that aims to opportunistically preserve pairwise distances between reconstructed 3D points. In this process, the dense depth map is learned implicitly using the as-rigid-as-possible hypothesis. Our method provides promising results, demonstrating its capability of reconstructing 3D from challenging videos of non-rigid scenes. Furthermore, the proposed method also provides unsupervised motion segmentation results as an auxiliary output.
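The training signal described above is a pairwise-distance preservation term (in the as-rigid-as-possible spirit) between 3D points back-projected from two frames. The snippet below is a minimal illustrative sketch of such a term, not the paper's actual objective: it assumes a pinhole intrinsics matrix K, network-predicted depths sampled at densely matched pixel locations, and an optional per-pair weight matrix standing in for the "opportunistic" selection of pairs whose distances should be preserved; all names, shapes, and values are hypothetical.

```python
import torch

def backproject(depth, pixels, K_inv):
    """Lift 2D pixel coordinates to 3D camera-space points using predicted depth.

    depth:  (N,) depth values sampled at the pixel locations
    pixels: (N, 2) pixel coordinates (x, y)
    K_inv:  (3, 3) inverse camera intrinsics
    """
    ones = torch.ones(pixels.shape[0], 1)
    homog = torch.cat([pixels, ones], dim=1)   # (N, 3) homogeneous pixels
    rays = homog @ K_inv.T                     # (N, 3) normalized camera rays
    return rays * depth.unsqueeze(1)           # (N, 3) 3D points

def pairwise_distance_loss(points_a, points_b, weights=None):
    """Penalize changes in pairwise Euclidean distances between corresponding
    3D points of two frames (an as-rigid-as-possible style objective).

    points_a, points_b: (N, 3) reconstructed 3D points for matched pixels
    weights: optional (N, N) per-pair weights that down-weight pairs whose
             distance cannot be preserved (e.g. across independently moving parts)
    """
    d_a = torch.cdist(points_a, points_a)      # (N, N) distances in frame a
    d_b = torch.cdist(points_b, points_b)      # (N, N) distances in frame b
    residual = (d_a - d_b).abs()
    if weights is not None:
        residual = residual * weights
    return residual.mean()

# Toy usage with random data (hypothetical intrinsics and depths):
if __name__ == "__main__":
    K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
    K_inv = torch.inverse(K)
    pix_a = torch.rand(64, 2) * torch.tensor([640., 480.])  # matched pixels, frame a
    pix_b = pix_a + torch.randn(64, 2)                       # matched pixels, frame b
    depth_a = torch.rand(64) * 5 + 1                         # depths from a depth network
    depth_b = torch.rand(64) * 5 + 1
    pts_a = backproject(depth_a, pix_a, K_inv)
    pts_b = backproject(depth_b, pix_b, K_inv)
    print(pairwise_distance_loss(pts_a, pts_b))
```

In such a formulation, pairs that straddle independently moving parts would receive low weights, which is presumably also where the auxiliary unsupervised motion segmentation mentioned in the abstract comes from.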
