Paper Title

LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network

Paper Authors

Xudong Lv, Boya Wang, Dong Ye, Shuo Wang

Paper Abstract

In this paper, we propose a novel online self-calibration approach for Light Detection and Ranging (LiDAR) and camera sensors. Compared to previous CNN-based methods that concatenate the feature maps of the RGB image and the decalibrated depth image, we exploit a cost volume inspired by PWC-Net for feature matching. Besides the smooth L1 loss on the predicted extrinsic calibration parameters, an additional point cloud loss is applied. Instead of regressing the extrinsic parameters between the LiDAR and the camera directly, we predict the decalibrated deviation from the initial calibration to the ground truth. During inference, the calibration error is further reduced through iterative refinement and a temporal filtering approach. Evaluation results on the KITTI dataset show that our approach outperforms state-of-the-art CNN-based methods, achieving a mean absolute calibration error of 0.297 cm in translation and 0.017° in rotation under miscalibration magnitudes of up to 1.5 m and 20°.
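
The sketch below is not the authors' implementation; it only illustrates the PWC-Net-style cost volume idea the abstract refers to: correlating an RGB feature map with a projected-depth feature map in a local search window, then regressing the 6-DoF deviation from the initial calibration. The feature extractors are omitted, and the layer sizes, search radius `md`, and the `DeviationHead` module are illustrative assumptions. The iterative refinement and temporal filtering used at inference are also omitted here.

```python
# Minimal sketch (assumptions noted above), not the LCCNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def cost_volume(feat_rgb, feat_depth, md=4):
    """Correlate each RGB feature with depth features in a (2*md+1)^2 neighbourhood."""
    b, c, h, w = feat_rgb.shape
    pad = F.pad(feat_depth, (md, md, md, md))
    volumes = []
    for dy in range(2 * md + 1):
        for dx in range(2 * md + 1):
            shifted = pad[:, :, dy:dy + h, dx:dx + w]
            volumes.append((feat_rgb * shifted).mean(dim=1, keepdim=True))
    return torch.cat(volumes, dim=1)  # (B, (2*md+1)^2, H, W)


class DeviationHead(nn.Module):
    """Regress the calibration deviation (tx, ty, tz, roll, pitch, yaw) from the cost volume."""
    def __init__(self, md=4):
        super().__init__()
        in_ch = (2 * md + 1) ** 2
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 6)

    def forward(self, feat_rgb, feat_depth):
        cv = F.leaky_relu(cost_volume(feat_rgb, feat_depth), 0.1)
        return self.fc(self.conv(cv).flatten(1))


if __name__ == "__main__":
    head = DeviationHead()
    rgb_feat = torch.randn(2, 64, 32, 96)    # features of the RGB image
    depth_feat = torch.randn(2, 64, 32, 96)  # features of the decalibrated depth image
    pred = head(rgb_feat, depth_feat)
    gt = torch.randn(2, 6)
    # Smooth L1 loss on the predicted deviation; the paper adds a point cloud loss on top.
    loss = F.smooth_l1_loss(pred, gt)
    print(pred.shape, loss.item())
```

In the full approach described in the abstract, the predicted deviation would be used to correct the extrinsics, the point cloud re-projected, and the network applied again (iterative refinement), with the estimates additionally smoothed over time.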
