Paper Title

Leveraging Single-View Images for Unsupervised 3D Point Cloud Completion

Authors

Lintai Wu, Qijian Zhang, Junhui Hou, Yong Xu

Abstract

Point clouds captured by scanning devices are often incomplete due to occlusion. To overcome this limitation, point cloud completion methods have been developed to predict the complete shape of an object based on its partial input. These methods can be broadly classified as supervised or unsupervised. However, both categories require a large number of 3D complete point clouds, which may be difficult to capture. In this paper, we propose Cross-PCC, an unsupervised point cloud completion method without requiring any 3D complete point clouds. We only utilize 2D images of the complete objects, which are easier to capture than 3D complete and clean point clouds. Specifically, to take advantage of the complementary information from 2D images, we use a single-view RGB image to extract 2D features and design a fusion module to fuse the 2D and 3D features extracted from the partial point cloud. To guide the shape of predicted point clouds, we project the predicted points of the object to the 2D plane and use the foreground pixels of its silhouette maps to constrain the position of the projected points. To reduce the outliers of the predicted point clouds, we propose a view calibrator to move the points projected to the background into the foreground by the single-view silhouette image. To the best of our knowledge, our approach is the first point cloud completion method that does not require any 3D supervision. The experimental results of our method are superior to those of the state-of-the-art unsupervised methods by a large margin. Moreover, our method even achieves comparable performance to some supervised methods. We will make the source code publicly available at https://github.com/ltwu6/cross-pcc.
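The silhouette constraint described above can be sketched as follows: predicted 3D points are projected onto the image plane of the reference view, and each projected point is checked against the foreground of the silhouette map. Points landing in the background are the outliers the view calibrator would move back toward the foreground. This is a minimal illustration assuming a simple pinhole camera with known intrinsics `K` and points already in the camera frame; the function names are hypothetical and not the paper's actual implementation.

```python
import numpy as np

def project_points(points, K):
    """Project N x 3 camera-frame points to pixel coordinates
    using a 3 x 3 pinhole intrinsics matrix K."""
    uv = (K @ points.T).T          # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]    # perspective divide
    return uv

def silhouette_mask(uv, silhouette):
    """Return a boolean mask per point: True if the projected
    point falls on a foreground pixel of the silhouette map."""
    h, w = silhouette.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return silhouette[v, u] > 0
```

In training, the complement of this mask identifies background-projected points, which can then be penalized or pulled toward the nearest foreground pixel, as the view calibrator in the paper is described as doing.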
