Paper Title
Event-VPR: End-to-End Weakly Supervised Network Architecture for Event-based Visual Place Recognition
Paper Authors
Paper Abstract
Traditional visual place recognition (VPR) methods generally use frame-based cameras, which are prone to failure under dramatic illumination changes or fast motion. In this paper, we propose an end-to-end visual place recognition network for event cameras that achieves good place recognition performance in challenging environments. The key idea of the proposed algorithm is to first characterize the event streams with the EST voxel grid, then extract features using a convolutional network, and finally aggregate the features using an improved VLAD network, realizing end-to-end visual place recognition from event streams. To verify the effectiveness of the proposed algorithm, we compare it with classical VPR methods on event-based driving datasets (MVSEC, DDD17) and a synthetic dataset (Oxford RobotCar). Experimental results show that the proposed method achieves much better performance in challenging scenarios. To the best of our knowledge, this is the first end-to-end event-based VPR method. The accompanying source code is available at https://github.com/kongdelei/Event-VPR.
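The pipeline summarized in the abstract (event representation → feature extraction → VLAD aggregation) can be sketched roughly as follows. This is only an illustrative NumPy approximation, not the paper's implementation: the EST representation in the paper uses a *learned* temporal kernel trained end-to-end, whereas the sketch below substitutes a fixed triangular (bilinear-in-time) kernel, and the `vlad_aggregate` function stands in for the NetVLAD-style layer with hypothetical, simplified soft assignment. All function names are assumptions.

```python
import numpy as np

def est_voxel_grid(events, H, W, B):
    """Accumulate an event stream into a B-bin voxel grid of shape (B, H, W).
    events: array of rows (x, y, t, p) with timestamps t normalized to [0, 1]
    and polarity p in {-1, +1}. A fixed triangular kernel replaces the
    learned kernel of the actual EST representation."""
    grid = np.zeros((B, H, W))
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t_scaled = events[:, 2] * (B - 1)   # map timestamps onto bin coordinates
    p = events[:, 3]
    for b in range(B):
        # bilinear-in-time weight: each event contributes to its two
        # nearest temporal bins, proportionally to its distance from them
        w = np.maximum(0.0, 1.0 - np.abs(t_scaled - b))
        np.add.at(grid[b], (y, x), p * w)
    return grid

def vlad_aggregate(features, centroids):
    """NetVLAD-style aggregation: soft-assign N local D-dim features to K
    centroids, sum the residuals per centroid, intra-normalize each K-row,
    then L2-normalize the flattened (K*D,) descriptor."""
    sim = features @ centroids.T                         # (N, K) logits
    a = np.exp(sim - sim.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                    # soft assignment
    resid = features[:, None, :] - centroids[None, :, :] # (N, K, D) residuals
    vlad = (a[:, :, None] * resid).sum(axis=0)           # (K, D)
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12  # intra-norm
    v = vlad.ravel()
    return v / (np.linalg.norm(v) + 1e-12)               # global L2 norm
```

In the full network, a convolutional backbone would map the voxel grid to the local feature set before aggregation, and the resulting global descriptor is compared across places by Euclidean or cosine distance.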