Paper Title

Explainable AI for tailored electricity consumption feedback -- an experimental evaluation of visualizations

Authors

Wastensteiner, Jacqueline, Weiss, Tobias M., Haag, Felix, Hopf, Konstantin

Abstract

Machine learning (ML) methods can effectively analyse data, recognize patterns in them, and make high-quality predictions. Good predictions usually come along with "black-box" models that are unable to present the detected patterns in a human-readable way. Technical developments recently led to eXplainable Artificial Intelligence (XAI) techniques that aim to open such black boxes and enable humans to gain new insights from detected patterns. We investigated the application of XAI in an area where specific insights can have a significant effect on consumer behaviour, namely electricity use. Knowing that specific feedback on individuals' electricity consumption triggers resource conservation, we created five visualizations with ML and XAI methods from electricity consumption time series for highly personalized feedback, considering existing domain-specific design knowledge. Our experimental evaluation with 152 participants showed that humans can assimilate the patterns displayed by XAI visualizations, but such visualizations should follow known visualization patterns to be well understood by users.
