Article

The Successor Representation: Its Computational Logic and Neural Substrates

Journal

JOURNAL OF NEUROSCIENCE
Volume 38, Issue 33, Pages 7193-7200

Publisher

SOC NEUROSCIENCE
DOI: 10.1523/JNEUROSCI.0151-18.2018

Keywords

cognitive map; dopamine; hippocampus; reinforcement learning; reward

Funding

  1. National Institutes of Health [CRCNS R01-1207833]
  2. Office of Naval Research [N000141712984]
  3. Alfred P. Sloan Research Fellowship
  4. U.S. Department of Defense (DOD) [N000141712984]


Reinforcement learning is the process by which an agent learns to predict long-term future reward. We now understand a great deal about the brain's reinforcement learning algorithms, but we know considerably less about the representations of states and actions over which these algorithms operate. A useful starting point is asking what kinds of representations we would want the brain to have, given the constraints on its computational architecture. Following this logic leads to the idea of the successor representation, which encodes states of the environment in terms of their predictive relationships with other states. Recent behavioral and neural studies have provided evidence for the successor representation, and computational studies have explored ways to extend the original idea. This paper reviews progress on these fronts, organizing them within a broader framework for understanding how the brain negotiates tradeoffs between efficiency and flexibility for reinforcement learning.
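The abstract's central object, the successor representation, encodes each state by the expected discounted future occupancy of every other state, so that values factor into predictive structure times reward. A minimal tabular sketch of how such a matrix can be learned by temporal-difference updates is below; the chain environment, learning rate, and episode count are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Minimal sketch of successor-representation (SR) TD learning.
# Assumptions (not from the paper): a 5-state deterministic chain,
# a fixed policy, tabular states, and hand-picked hyperparameters.
n_states = 5
gamma, alpha = 0.9, 0.1
M = np.zeros((n_states, n_states))  # M[s, s'] ~ expected discounted visits to s' from s

def step(s):
    # Left-to-right chain; the last state is absorbing.
    return min(s + 1, n_states - 1)

for episode in range(500):
    s = 0
    for _ in range(n_states):
        s_next = step(s)
        onehot = np.eye(n_states)[s]
        # TD update on the successor matrix:
        # M(s,:) <- M(s,:) + alpha * (1_s + gamma * M(s',:) - M(s,:))
        M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
        s = s_next

# Values factor as V = M @ R, so when rewards change,
# only R must be relearned, not the predictive map M.
R = np.zeros(n_states)
R[-1] = 1.0
V = M @ R
```

This factorization is what gives the SR its efficiency-flexibility middle ground discussed in the review: the predictive matrix M is cached like a model-free value table, yet revaluation after a reward change requires only updating R.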

