Title
Contrastive Learning of Features between Images and LiDAR
Authors
Abstract
Images and point clouds provide robots with different information. Finding correspondences between data from different sensors is crucial for various tasks such as localization, mapping, and navigation. Learning-based descriptors have been developed for single sensors, but there is little work on cross-modal features. This work treats learning cross-modal features as a dense contrastive learning problem. We propose a Tuple-Circle loss function for cross-modality feature learning. Furthermore, to learn good features without losing generality, we develop a variant of the widely used PointNet++ architecture for point clouds and a U-Net CNN architecture for images. Moreover, we conduct experiments on a real-world dataset to show the effectiveness of our loss function and network structure. By visualizing the learned features, we show that our models indeed capture information from both images and LiDAR.
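The abstract does not give the Tuple-Circle loss formula. As context, the circle loss it presumably extends (Sun et al., CVPR 2020) weights each positive and negative similarity by its distance from an optimum; the sketch below is an illustrative NumPy implementation of that standard form, not the paper's method, with toy similarity values standing in for image-LiDAR feature pairs.

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """Standard circle loss over cosine similarities.

    sp: similarities of positive (matching) pairs.
    sn: similarities of negative (non-matching) pairs.
    m:  relaxation margin; gamma: scale factor.
    Note: this is the generic circle loss, shown only as background
    for the paper's (unspecified) Tuple-Circle variant.
    """
    op, on = 1.0 + m, -m   # optima for positive / negative similarities
    dp, dn = 1.0 - m, m    # decision margins
    ap = np.clip(op - sp, 0.0, None)   # adaptive positive weights
    an = np.clip(sn - on, 0.0, None)   # adaptive negative weights
    logit_p = -gamma * ap * (sp - dp)
    logit_n = gamma * an * (sn - dn)
    # L = log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
    return np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum())

# Toy example: similarities between one image feature and its
# matching / non-matching LiDAR point features (hypothetical values).
sp = np.array([0.9, 0.8])        # cross-modal positive pairs
sn = np.array([0.1, 0.3, -0.2])  # cross-modal negatives
loss = circle_loss(sp, sn)
```

Because the weights `ap` and `an` shrink as a pair approaches its optimum, well-separated pairs contribute little gradient, which is the property that makes circle-style losses attractive for dense contrastive training.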