Paper Title
Modeling Dispositional and Initial learned Trust in Automated Vehicles with Predictability and Explainability
Paper Authors
Abstract
Technological advances in the automotive industry are bringing automated driving closer to road use. However, one of the most important factors affecting public acceptance of automated vehicles (AVs) is the public's trust in AVs. Many factors can influence people's trust, including perception of risks and benefits, feelings, and knowledge of AVs. This study aims to use these factors to predict people's dispositional and initial learned trust in AVs using a survey study conducted with 1175 participants. For each participant, 23 features were extracted from the survey questions to capture their knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used as input to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of SHapley Additive exPlanations (SHAP), we were able to interpret the trust predictions and thereby further improve the explainability of the XGBoost model. Our findings show that, compared to traditional regression models and black-box machine learning models, this approach simultaneously provides a high level of both explainability and predictive power for trust in AVs.