Paper Title
Constructive Interpretability with CoLabel: Corroborative Integration, Complementary Features, and Collaborative Learning
Paper Authors
Paper Abstract
Machine learning models with explainable predictions are increasingly sought after, especially for real-world, mission-critical applications that require bias detection and risk mitigation. Inherent interpretability, where a model is designed from the ground-up for interpretability, provides intuitive insights and transparent explanations on model prediction and performance. In this paper, we present CoLabel, an approach to build interpretable models with explanations rooted in the ground truth. We demonstrate CoLabel in a vehicle feature extraction application in the context of vehicle make-model recognition (VMMR). CoLabel performs VMMR with a composite of interpretable features such as vehicle color, type, and make, all based on interpretable annotations of the ground truth labels. First, CoLabel performs corroborative integration to join multiple datasets that each have a subset of desired annotations of color, type, and make. Then, CoLabel uses decomposable branches to extract complementary features corresponding to desired annotations. Finally, CoLabel fuses them together for final predictions. During feature fusion, CoLabel harmonizes complementary branches so that VMMR features are compatible with each other and can be projected to the same semantic space for classification. With inherent interpretability, CoLabel achieves superior performance to the state-of-the-art black-box models, with accuracy of 0.98, 0.95, and 0.94 on CompCars, Cars196, and BoxCars116K, respectively. CoLabel provides intuitive explanations due to constructive interpretability, and subsequently achieves high accuracy and usability in mission-critical situations.
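The corroborative-integration step described above, joining multiple datasets that each carry only a subset of the desired annotations (color, type, make), can be sketched in plain Python. This is a minimal illustration under assumed data shapes, not the paper's actual implementation; the function name, the per-image annotation dicts, and the conflict-handling policy are all hypothetical.

```python
# Hedged sketch of CoLabel-style corroborative integration: merge per-image
# annotation dicts from several datasets, where each dataset supplies only a
# subset of {color, type, make}. All names here are illustrative.

def corroborative_integration(datasets):
    """Merge annotation dicts keyed by image id.

    `datasets` is a list of {image_id: {attribute: value}} mappings.
    Overlapping attributes corroborate each other: agreeing values are
    kept, disagreements are dropped and reported for auditing against
    the ground truth. (A later dataset could re-add a dropped attribute
    in this simple sketch; a real pipeline would track conflicted keys.)
    """
    merged, conflicts = {}, []
    for ds in datasets:
        for image_id, annots in ds.items():
            record = merged.setdefault(image_id, {})
            for attr, value in annots.items():
                if attr in record and record[attr] != value:
                    conflicts.append((image_id, attr, record[attr], value))
                    record.pop(attr)  # corroboration failed: discard
                else:
                    record[attr] = value
    return merged, conflicts

# Toy example: one dataset annotates color and make, another annotates type.
ds_a = {"img1": {"color": "red", "make": "Toyota"}}
ds_b = {"img1": {"type": "sedan"}, "img2": {"type": "SUV"}}
merged, conflicts = corroborative_integration([ds_a, ds_b])
```

Here `merged["img1"]` ends up with all three annotations, giving the decomposable branches a jointly annotated training record even though no single source dataset provided one.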