Paper Title

DFR-TSD: A Deep Learning Based Framework for Robust Traffic Sign Detection Under Challenging Weather Conditions

Paper Authors

Sabbir Ahmed, Uday Kamal, Md. Kamrul Hasan

Paper Abstract

Robust traffic sign detection and recognition (TSDR) is of paramount importance for the successful realization of autonomous vehicle technology. The importance of this task has led to a vast amount of research effort, and many promising methods have been proposed in the existing literature. However, state-of-the-art (SOTA) methods have been evaluated on clean, challenge-free datasets, overlooking the performance deterioration associated with the different challenging conditions (CCs) that obscure traffic images captured in the wild. In this paper, we look at the TSDR problem under CCs and focus on the performance degradation associated with them. To overcome this, we propose a Convolutional Neural Network (CNN) based TSDR framework with prior enhancement. Our modular approach consists of a CNN-based challenge classifier; Enhance-Net, an encoder-decoder CNN architecture for image enhancement; and two separate CNN architectures for sign detection and classification. We propose a novel training pipeline for Enhance-Net that focuses on enhancing the traffic sign regions (instead of the whole image) in challenging images, subject to their accurate detection. We use the CURE-TSD dataset, consisting of traffic videos captured under different CCs, to evaluate the efficacy of our approach. We experimentally show that our method obtains an overall precision of 91.1% and recall of 70.71%, which is a 7.58% and 35.90% improvement in precision and recall, respectively, over the current benchmark. Furthermore, we compare our approach with SOTA object detection networks, Faster R-CNN and R-FCN, and show that our approach outperforms them by a large margin.
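The abstract describes a modular pipeline: a challenge classifier, Enhance-Net for prior image enhancement, and separate detection and classification networks. A minimal sketch of how these stages could be composed is shown below; all function names and return values are illustrative placeholders (stubs), not the authors' actual implementation.

```python
# Hypothetical sketch of the DFR-TSD modular pipeline from the abstract.
# Each stage is a stub standing in for a trained CNN.

def classify_challenge(image):
    # Stage 1: CNN-based challenge classifier predicts the weather or
    # challenge condition degrading the frame (e.g. rain, snow, haze).
    # Placeholder: always reports "rain".
    return "rain"

def enhance_net(image, challenge):
    # Stage 2: Enhance-Net, an encoder-decoder CNN trained to enhance
    # the traffic-sign regions of challenging images prior to detection.
    # Placeholder: identity mapping.
    return image

def detect_signs(image):
    # Stage 3: sign-detection CNN returns candidate bounding boxes
    # as (x, y, width, height) tuples. Placeholder: one dummy box.
    return [(10, 10, 32, 32)]

def classify_sign(image, box):
    # Stage 4: sign-classification CNN assigns a traffic-sign class
    # to each detected box. Placeholder: constant label.
    return "stop"

def tsdr_pipeline(image):
    # Compose the stages: classify the challenge, enhance accordingly,
    # then detect and classify signs in the enhanced image.
    challenge = classify_challenge(image)
    enhanced = enhance_net(image, challenge)
    boxes = detect_signs(enhanced)
    return [(box, classify_sign(enhanced, box)) for box in boxes]
```

The design point is modularity: because enhancement happens before detection, the detector and classifier can be trained and swapped independently of the enhancement model.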
