Paper Title
XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection
Paper Authors
Paper Abstract
Explainable AI (XAI) methods are frequently applied to obtain qualitative insights about deep models' predictions. However, such insights need to be interpreted by a human observer to be useful. In this paper, we aim to use explanations directly to make decisions without human observers. We adopt two gradient-based explanation methods, Integrated Gradients (IG) and backprop, for the task of 3D object detection. Then, we propose a set of quantitative measures, named Explanation Concentration (XC) scores, that can be used for downstream tasks. These scores quantify the concentration of attributions within the boundaries of detected objects. We evaluate the effectiveness of XC scores via the task of distinguishing true positive (TP) and false positive (FP) detected objects in the KITTI and Waymo datasets. The results demonstrate an improvement of more than 100% on both datasets compared to other heuristics such as random guessing and the number of LiDAR points in the bounding box, raising confidence in XC's potential for application in more use cases. Our results also indicate that computationally expensive XAI methods like IG may not be more valuable than simpler methods when used quantitatively.
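The abstract describes XC as a measure of how concentrated a detection's attributions are inside its bounding box, but it does not state the formula. The sketch below is a minimal illustration of one plausible variant, assuming XC is the fraction of absolute attribution mass that falls inside the box; the name xc_score and this ratio form are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def xc_score(attributions: np.ndarray, in_box_mask: np.ndarray) -> float:
    """Illustrative (assumed) Explanation Concentration score.

    attributions: per-input attribution values for one detection,
        e.g. from Integrated Gradients or backprop.
    in_box_mask: boolean mask, True where the input element lies inside
        the detected object's bounding box.

    Returns the fraction of total absolute attribution mass inside the
    box; values near 1 indicate attributions concentrated on the object.
    """
    magnitudes = np.abs(attributions)
    total = magnitudes.sum()
    if total == 0.0:
        return 0.0
    return float(magnitudes[in_box_mask].sum() / total)

# Example: most attribution mass lies inside the detection's box.
attr = np.array([0.05, 0.40, 0.35, 0.10, 0.02])
mask = np.array([False, True, True, True, False])
print(xc_score(attr, mask))  # ~0.92
```

Under this assumption, detections well supported by evidence inside their boxes score close to 1, which is the property the abstract exploits to separate TP from FP detections.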