Paper Title


Explainability in Deep Reinforcement Learning, a Review into Current Methods and Applications

Authors

Thomas Hickling, Abdelhafid Zenati, Nabil Aouf, Phillippa Spencer

Abstract


The use of Deep Reinforcement Learning (DRL) schemes has increased dramatically since their first introduction in 2015. Though they are finding use in many different applications, they still suffer from a lack of interpretability. This has bred a lack of understanding and trust in the use of DRL solutions among researchers and the general public. To address this problem, the field of Explainable Artificial Intelligence (XAI) has emerged. It encompasses a variety of methods that seek to open the DRL black box, ranging from interpretable symbolic Decision Trees (DT) to numerical methods like Shapley values. This review examines which methods are being used and for which applications, in order to identify which models are best suited to each application and whether any methods are being underutilised.
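As a minimal sketch of the Shapley-value approach the abstract mentions (not code from the paper), the snippet below computes exact Shapley attributions from their defining formula for a toy Q-function over three state features. The `q_value` function, the example `state`, and the zero `baseline` are all hypothetical placeholders; real DRL explainers typically approximate these sums, since exact computation is exponential in the number of features.

```python
# Illustrative sketch only: exact Shapley-value feature attribution
# for a hypothetical DRL Q-function. Not the paper's implementation.
from itertools import combinations
from math import factorial

import numpy as np

def q_value(x):
    # Toy stand-in for a learned Q-function over 3 state features.
    return 2.0 * x[0] + x[1] * x[2] - 0.5 * x[2]

def shapley_values(f, x, baseline):
    # Exact Shapley attribution: the coalition value v(S) evaluates f
    # with features outside S replaced by their baseline values.
    n = len(x)
    idx = np.arange(n)

    def v(S):
        masked = np.where(np.isin(idx, list(S)), x, baseline)
        return f(masked)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

state = np.array([1.0, 0.5, -2.0])  # hypothetical observation
baseline = np.zeros(3)              # reference value for "absent" features
phi = shapley_values(q_value, state, baseline)
print(phi)  # attributions sum to q_value(state) - q_value(baseline)
```

By construction the attributions satisfy the efficiency property: they sum to the difference between the Q-value at the observed state and at the baseline, which is what makes them attractive as a numerical explanation of a policy's value estimates.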
