4.7 Article

Local and global explanations of agent behavior: Integrating strategy summaries with saliency maps

Journal

ARTIFICIAL INTELLIGENCE
Volume 301, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.artint.2021.103571

Keywords

Explainable AI; Strategy summarization; Saliency maps; Reinforcement learning; Deep learning

Funding

  1. Israeli Science Foundation [2185/20]
  2. Deutsche Forschungsgemeinschaft (DFG) [392401413]

This paper investigates how to explain the behavior of reinforcement learning agents, combining global and local explanation methods and evaluating their effects through user studies. The study finds that the choice of which states to include in the summary has a significant impact on people's understanding of agents, while adding saliency maps did not significantly improve performance in most cases.
With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging: the environments in which they act have large state spaces, and their decision-making can be affected by delayed rewards, making their behavior difficult to analyze. To address this problem, several approaches have been developed. Some attempt to convey the global behavior of the agent, describing the actions it takes in different states. Others devise local explanations, which provide information about the agent's decision-making in a particular state. In this paper, we combine global and local explanation methods and evaluate their joint and separate contributions, providing (to the best of our knowledge) the first user study of combined local and global explanations for RL agents. Specifically, we augment strategy summaries, which extract important trajectories of states from simulations of the agent, with saliency maps, which show what information the agent attends to. Our results show that the choice of which states to include in the summary (global information) strongly affects people's understanding of agents: participants shown summaries that included important states significantly outperformed participants who were presented with agent behavior in a set of world-states likely to appear during gameplay. We find mixed results with respect to augmenting demonstrations with saliency maps (local information): the addition of saliency maps, in the form of raw heat maps, did not significantly improve performance in most cases. However, we do find some evidence that saliency maps can help users better understand what information the agent relies on during its decision-making, suggesting avenues for future work that can further improve explanations of RL agents. © 2021 The Authors. Published by Elsevier B.V.
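
The abstract names the two ingredients (strategy summaries and saliency maps) but not their exact algorithms. The sketch below is a rough illustration only, assuming a Q-value-spread importance measure for the global part (in the spirit of HIGHLIGHTS-style summarization) and simple occlusion-based saliency for the local part; trajectory, q_fn, and all parameter names are hypothetical stand-ins, not the paper's actual interface or methods.

    # Illustrative sketch only; not code from the paper.
    import numpy as np

    def state_importance(q_values):
        # Spread between the best and worst action values in a state,
        # max_a Q(s,a) - min_a Q(s,a): a common importance proxy.
        return float(np.max(q_values) - np.min(q_values))

    def summarize(trajectory, q_fn, budget=5, context=2):
        # Global explanation: keep short clips around the `budget` most
        # important states observed in a simulated trajectory.
        scores = [state_importance(q_fn(s)) for s in trajectory]
        top = sorted(range(len(trajectory)), key=scores.__getitem__,
                     reverse=True)[:budget]
        return [trajectory[max(0, i - context):i + context + 1]
                for i in sorted(top)]

    def occlusion_saliency(frame, q_fn, patch=4):
        # Local explanation: a raw heat map scoring how much occluding
        # each image patch changes the Q-value of the greedy action.
        base_q = q_fn(frame)
        action = int(np.argmax(base_q))
        heat = np.zeros(frame.shape[:2])
        for y in range(0, frame.shape[0], patch):
            for x in range(0, frame.shape[1], patch):
                occluded = frame.copy()
                occluded[y:y + patch, x:x + patch] = 0.0
                heat[y:y + patch, x:x + patch] = abs(
                    q_fn(occluded)[action] - base_q[action])
        return heat

Under these assumptions, a combined explanation would render the clips returned by summarize as short videos, optionally overlaying each frame with the heat map from occlusion_saliency, mirroring the global-plus-local presentation evaluated in the paper's user studies.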
