Paper Title
Det-SLAM: A semantic visual SLAM for highly dynamic scenes using Detectron2
Paper Authors
Paper Abstract
Simultaneous Localization and Mapping (SLAM) is an intrinsic component of autonomous robotic systems. Several SLAM systems with impressive performance have been developed and deployed over the past few decades. However, unresolved issues remain, such as how to handle moving objects in dynamic environments. Classic SLAM systems rely on the assumption of a static environment, which becomes untenable in highly dynamic scenes. Several methods have been proposed to tackle this issue in recent years, but each has its limitations. This work combines the visual SLAM system ORB-SLAM3 with the Detectron2 segmentation framework to present Det-SLAM, which employs depth information and semantic segmentation to identify and remove dynamic points, achieving semantic SLAM for dynamic scenes. Evaluation on the public TUM dataset indicates that Det-SLAM is more resilient than previous dynamic SLAM systems and reduces camera pose estimation error in dynamic indoor scenes.
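As a rough illustration of the per-frame filtering idea described in the abstract, the sketch below uses a COCO-pretrained Mask R-CNN from the Detectron2 model zoo to segment potentially dynamic objects (people) and discards ORB keypoints that fall inside those masks. This is a minimal, standalone Python approximation under assumed settings, not the authors' actual ORB-SLAM3 integration; the model choice, score threshold, dynamic-class set, and the filter_static_keypoints helper are illustrative assumptions.

```python
import cv2
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Configure a COCO-pretrained Mask R-CNN from the Detectron2 model zoo
# (assumed choice; the paper does not mandate this exact backbone).
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # assumed confidence threshold
predictor = DefaultPredictor(cfg)

DYNAMIC_CLASSES = {0}  # COCO class 0 is "person"; other movable classes could be added

def dynamic_mask(bgr_image):
    """Return a boolean mask that is True on pixels belonging to dynamic objects."""
    instances = predictor(bgr_image)["instances"].to("cpu")
    mask = np.zeros(bgr_image.shape[:2], dtype=bool)
    for cls, seg in zip(instances.pred_classes.numpy(), instances.pred_masks.numpy()):
        if int(cls) in DYNAMIC_CLASSES:
            mask |= seg
    return mask

def filter_static_keypoints(bgr_image):
    """Detect ORB keypoints and keep only those outside the dynamic regions."""
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(bgr_image, None)
    mask = dynamic_mask(bgr_image)
    return [kp for kp in keypoints
            if not mask[int(kp.pt[1]), int(kp.pt[0])]]

if __name__ == "__main__":
    frame = cv2.imread("rgb_frame.png")  # hypothetical RGB frame from a TUM sequence
    kept = filter_static_keypoints(frame)
    print(f"{len(kept)} static keypoints retained for tracking")
```

In the full system the surviving static points would be handed to the ORB-SLAM3 tracking front end (and, per the abstract, cross-checked against depth information) so that camera pose estimation ignores features on moving objects.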