Paper Title
Neural Scene Representation for Locomotion on Structured Terrain
Paper Authors
Paper Abstract
We propose a learning-based method to reconstruct the local terrain for locomotion with a mobile robot traversing urban environments. Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the algorithm estimates the topography in the robot's vicinity. The raw measurements from these cameras are noisy and only provide partial and occluded observations that in many cases do not show the terrain the robot stands on. Therefore, we propose a 3D reconstruction model that faithfully reconstructs the scene despite the noisy measurements and the large amount of missing data caused by the blind spots of the camera arrangement. The model consists of a 4D fully convolutional network on point clouds, which learns geometric priors to complete the scene from context, and an auto-regressive feedback loop that leverages spatio-temporal consistency and exploits evidence from the past. The network can be trained solely on synthetic data, and thanks to extensive augmentation it is robust in the real world, as shown by the validation on a quadrupedal robot, ANYmal, traversing challenging settings. We run the pipeline on the robot's onboard low-power computer using an efficient sparse tensor implementation and show that the proposed method outperforms classical map representations.
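The abstract describes the architecture only at a high level. The sketch below illustrates the two ideas it names, scene completion with a fully convolutional network and auto-regressive feedback from the previous estimate, using a dense 3D occupancy grid in plain PyTorch as a simplified stand-in for the paper's 4D sparse-tensor formulation. The class name TerrainCompletionNet, the grid resolution, and the channel sizes are assumptions for illustration, not the authors' implementation.

    # Minimal, illustrative sketch (not the authors' code): a fully convolutional
    # network that completes a voxelized terrain estimate and is applied
    # auto-regressively, feeding its previous output back in at every time step.
    import torch
    import torch.nn as nn

    class TerrainCompletionNet(nn.Module):
        """Completes a voxelized scene from a noisy, partial measurement.

        Input channels: [current noisy occupancy, previous reconstruction (feedback)].
        Output channel: completed occupancy estimate in the robot's vicinity.
        """
        def __init__(self, hidden: int = 16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(2, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(hidden, 1, kernel_size=3, padding=1),
            )

        def forward(self, occupancy: torch.Tensor, previous: torch.Tensor) -> torch.Tensor:
            # Concatenate the new (partial, occluded) measurement with the previous
            # estimate so the network can reuse evidence from the past.
            x = torch.cat([occupancy, previous], dim=1)
            return torch.sigmoid(self.net(x))

    # Auto-regressive rollout over a stream of depth-derived occupancy grids.
    net = TerrainCompletionNet()
    grid = (1, 1, 32, 32, 16)                   # (batch, channel, x, y, z), assumed resolution
    previous = torch.zeros(grid)                # empty prior before the first frame
    for occupancy in [torch.rand(grid) for _ in range(3)]:   # placeholder measurement stream
        previous = net(occupancy, previous).detach()         # feed the estimate back at the next step

In the paper, the same principle is realized with a sparse tensor library operating directly on 4D (space-time) point-cloud coordinates, which is what makes the pipeline efficient enough for the robot's onboard low-power computer; the dense grid above only serves to make the data flow explicit.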