Journal
AUTONOMOUS ROBOTS
Volume 43, Issue 2, Pages 309-326
Publisher: SPRINGER
DOI: 10.1007/s10514-018-9771-0
Keywords
Transparency; Explainable artificial intelligence; Human-robot interaction; Inverse reinforcement learning
Funding
- Intel Labs
- Air Force [16RT0676]
- NSF CAREER Award [1351028]
- DARPA XAI
- Berkeley Deep Drive consortium
- NSF Fellowship
- Division of Information & Intelligent Systems
- Directorate for Computer & Information Science & Engineering [1351028] Funding Source: National Science Foundation
The overarching goal of this work is to efficiently enable end-users to correctly anticipate a robot's behavior in novel situations. Since a robot's behavior is often a direct result of its underlying objective function, our insight is that end-users need an accurate mental model of this objective function in order to understand and predict what the robot will do. While people naturally develop such a mental model over time by observing the robot act, this familiarization process may be lengthy. Our approach shortens it by having the robot model how people infer objectives from observed behavior, and then show the behaviors that are maximally informative under that model. We introduce two factors that define candidate models of human inference, and show that certain models indeed produce example robot behaviors that better enable users to anticipate what the robot will do in novel situations. Our results also reveal that choosing the appropriate model is key, and suggest that our candidate models do not fully capture how humans extrapolate from examples of robot behavior. We leverage these findings to propose a stronger model of human learning in this setting, and conclude by analyzing the impact of different ways in which the assumed model of human learning may be incorrect.
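The idea of modeling how people infer objectives from observed behavior, and then selecting maximally informative demonstrations, can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes a linear reward over trajectory features, a Boltzmann (noisily-rational) observer model of the human, and a small discrete set of candidate objectives; all function names and numbers are hypothetical.

```python
import numpy as np

def trajectory_reward(features, theta):
    """Linear reward: dot product of trajectory features and objective weights."""
    return float(np.dot(features, theta))

def human_posterior(features, thetas, beta=1.0):
    """Model the human as a Boltzmann-rational observer with a uniform prior:
    P(theta | traj) is proportional to exp(beta * reward(traj; theta))."""
    logits = np.array([beta * trajectory_reward(features, th) for th in thetas])
    probs = np.exp(logits - logits.max())  # shift for numerical stability
    return probs / probs.sum()

def most_informative_trajectory(trajs, thetas, true_idx, beta=1.0):
    """Pick the demonstration that maximizes the human's posterior belief
    in the robot's true objective (index true_idx in thetas)."""
    best_i, best_p = None, -1.0
    for i, features in enumerate(trajs):
        p = human_posterior(features, thetas, beta)[true_idx]
        if p > best_p:
            best_i, best_p = i, p
    return best_i, best_p
```

For example, with two candidate objectives `[1, 0]` and `[0, 1]` and trajectories whose features match one objective or split the difference, the selector prefers the trajectory that most sharply distinguishes the true objective from the alternatives, rather than the one that merely earns high reward.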