Paper Title
A Real-Time Fusion Framework for Long-term Visual Localization
Paper Authors
Abstract
Visual localization is a fundamental task that regresses 6 Degree of Freedom (6DoF) poses from image features to serve the high-precision localization requests of many robotics applications. Degenerate conditions such as motion blur, illumination changes, and environment variations pose great challenges to this task. Fusing additional information, such as sequential observations and Inertial Measurement Unit (IMU) inputs, can greatly alleviate these problems. In this paper, we present an efficient client-server visual localization architecture that fuses global and local pose estimates to achieve promising precision and efficiency. We incorporate additional geometric hints into the mapping and global pose regression modules to improve measurement quality, and adopt a loosely coupled fusion policy to balance computational complexity against accuracy. We evaluate our approach on two typical open-source benchmarks, 4Seasons and OpenLORIS. Quantitative results demonstrate that our framework achieves competitive performance with respect to other state-of-the-art visual localization solutions.
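To illustrate the loosely coupled fusion idea mentioned in the abstract, the sketch below shows one common way such a client-server scheme can be realized: the client tracks a high-rate but drifting local pose, while occasional global (server) relocalization results refresh a correction transform that re-expresses local poses in the global frame. This is only a minimal conceptual sketch under assumed conventions (4x4 homogeneous SE(3) matrices, `LooselyCoupledFuser` and its methods are hypothetical names), not the paper's actual implementation.

```python
import numpy as np


def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous SE(3) pose from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


class LooselyCoupledFuser:
    """Maintains a correction transform aligning the drifting local (client)
    frame with the global (server) frame. Local poses arrive at high rate;
    global poses arrive occasionally and refresh the correction, so the two
    estimators stay decoupled (hence "loosely coupled")."""

    def __init__(self):
        # Maps the local frame into the global frame; identity until the
        # first server result arrives.
        self.correction = np.eye(4)

    def on_global_pose(self, T_global: np.ndarray, T_local: np.ndarray) -> None:
        # When a server relocalization result arrives, recompute the frame
        # alignment so that correction @ T_local == T_global at this instant.
        self.correction = T_global @ np.linalg.inv(T_local)

    def fuse(self, T_local: np.ndarray) -> np.ndarray:
        # High-rate output: re-express the latest local pose in the global frame.
        return self.correction @ T_local


# Usage: the local tracker has drifted; one server result re-anchors it,
# and subsequent local motion is carried over into the global frame.
fuser = LooselyCoupledFuser()
T_local = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
T_global = make_pose(np.eye(3), np.array([1.2, 0.1, 0.0]))
fuser.on_global_pose(T_global, T_local)
assert np.allclose(fuser.fuse(T_local), T_global)

T_local_next = make_pose(np.eye(3), np.array([2.0, 0.0, 0.0]))
fused = fuser.fuse(T_local_next)
assert np.allclose(fused[:3, 3], [2.2, 0.1, 0.0])
```

A tightly coupled alternative would instead feed raw measurements (features, IMU) into a single estimator; the loose scheme trades some accuracy for much lower coupling and computational cost, matching the client-server split described above.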