Paper Title
Predicting and Understanding Human Action Decisions during Skillful Joint-Action via Machine Learning and Explainable-AI
Paper Authors
Paper Abstract
This study uses supervised machine learning (SML) and explainable artificial intelligence (AI) to model, predict, and understand human decision-making during skillful joint-action. Long short-term memory networks were trained to predict the target selection decisions of expert and novice actors completing a dyadic herding task. Results revealed that the trained models were expertise-specific and could not only accurately predict the target selection decisions of expert and novice herders, but could do so at timescales that preceded an actor's conscious intent. To understand what differentiated the target selection decisions of expert and novice actors, we then employed the explainable-AI technique SHapley Additive exPlanations (SHAP) to identify the importance of informational features (variables) for model predictions. This analysis revealed that experts were more influenced than novices by information about the state of their co-herders. The utility of employing SML and explainable-AI techniques for investigating human decision-making is discussed.
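SHAP attributes a model's prediction to its input features using Shapley values: each feature's average marginal contribution across all orderings in which features are revealed. As a minimal illustration of that idea (a toy additive-plus-interaction function standing in for the paper's trained LSTM, not the authors' actual model or data), exact Shapley values can be computed by brute force over feature orderings:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values for f at point x, relative to a baseline input.

    For each permutation of the features, flip features from their baseline
    value to their actual value one at a time, recording the change in f
    each flip causes; averaging those marginal contributions over all
    permutations yields each feature's Shapley value.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        for i in order:
            before = f(current)
            current[i] = x[i]       # reveal feature i
            phi[i] += f(current) - before
    return [p / len(perms) for p in phi]

# Toy "model": linear terms plus an interaction between the two features.
f = lambda z: 2 * z[0] + z[1] + 0.5 * z[0] * z[1]

phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# The interaction term's credit (0.5) is split equally between the features,
# and the attributions sum to f(x) - f(baseline), as Shapley values must.
```

This brute-force version is exponential in the number of features; practical SHAP implementations approximate these values efficiently, which is what makes the analysis feasible for models like the LSTMs described above.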