Article

Enabling robots to communicate their objectives

Journal

AUTONOMOUS ROBOTS
Volume 43, Issue 2, Pages 309-326

Publisher

SPRINGER
DOI: 10.1007/s10514-018-9771-0

Keywords

Transparency; Explainable artificial intelligence; Human-robot interaction; Inverse reinforcement learning

Funding

  1. Intel Labs
  2. Air Force [16RT0676]
  3. NSF CAREER Award [1351028]
  4. DARPA XAI
  5. Berkeley Deep Drive consortium
  6. NSF Fellowship
  7. Division of Information & Intelligent Systems
  8. Directorate for Computer & Information Science & Engineering [1351028] Funding Source: National Science Foundation

Abstract

The overarching goal of this work is to efficiently enable end-users to correctly anticipate a robot's behavior in novel situations. Since a robot's behavior is often a direct result of its underlying objective function, our insight is that end-users need an accurate mental model of this objective function in order to understand and predict what the robot will do. While people naturally develop such a mental model over time by observing the robot act, this familiarization process may be lengthy. Our approach reduces this time by having the robot model how people infer objectives from observed behavior, and then show the behaviors that are maximally informative. We introduce two factors that define candidate models of human inference, and show that certain models indeed produce example robot behaviors that better enable users to anticipate what the robot will do in novel situations. Our results also reveal that choosing the appropriate model is key, and suggest that our candidate models do not fully capture how humans extrapolate from examples of robot behavior. We leverage these findings to propose a stronger model of human learning in this setting, and conclude by analyzing the impact of different ways in which the assumed model of human learning may be incorrect.
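The selection scheme the abstract describes (model how an observer infers objectives from behavior, then show the most informative behavior) can be sketched as follows. This is a toy illustration, not the paper's implementation: the trajectory names, candidate objectives `theta_*`, reward values, and the Boltzmann-rational observer model are all hypothetical assumptions chosen for the example.

```python
import math

# Hypothetical toy setup: three candidate objectives and three candidate
# demonstration trajectories. rewards[traj][theta] is the return of a
# trajectory under each candidate objective; values are illustrative only.
rewards = {
    "traj_A": {"theta_1": 5.0, "theta_2": 4.8, "theta_3": 4.9},
    "traj_B": {"theta_1": 5.0, "theta_2": 1.0, "theta_3": 4.5},
    "traj_C": {"theta_1": 5.0, "theta_2": 4.0, "theta_3": 0.5},
}
true_theta = "theta_1"  # the robot's actual objective
beta = 1.0              # assumed observer rationality coefficient

def posterior(traj, theta):
    """Boltzmann-rational observer: P(theta | traj) ∝ P(traj | theta) P(theta).

    The observer assumes the robot picks a trajectory with probability
    proportional to exp(beta * reward), then updates a uniform prior
    over candidate objectives after seeing `traj`.
    """
    thetas = list(rewards[traj])
    prior = {t: 1.0 / len(thetas) for t in thetas}

    def likelihood(t):
        num = math.exp(beta * rewards[traj][t])
        den = sum(math.exp(beta * rewards[tr][t]) for tr in rewards)
        return num / den

    z = sum(likelihood(t) * prior[t] for t in thetas)
    return likelihood(theta) * prior[theta] / z

# The robot shows the demonstration that most sharpens the observer's
# posterior belief in the robot's true objective.
best = max(rewards, key=lambda tr: posterior(tr, true_theta))
```

Note that the most informative demonstration is generally not the one with the highest reward under the true objective, but the one whose reward profile best distinguishes the true objective from the alternatives.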

