Journal
eLife
Volume 8, Issue -, Pages -
Publisher
eLife Sciences Publications Ltd
DOI: 10.7554/eLife.47463
Funding
- Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung [CRSII2 147636, CRSII2 200020 165538]
- Horizon 2020 Framework Programme Human Brain Project (SGA2) [785907]
- European Research Council [268 689]
- Horizon 2020 Framework Programme Human Brain Project (SGA1) [720270]
Abstract
In many daily tasks, we make multiple decisions before reaching a goal. To learn such sequences of decisions, a mechanism that links earlier actions to later reward is necessary. Reinforcement learning (RL) theory suggests two classes of algorithms that solve this credit assignment problem: in classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here, we show one-shot learning of sequences. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. By focusing our analysis on those states for which RL with and without eligibility traces makes qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with an eligibility trace across multiple sensory modalities.
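The contrast between the two algorithm classes can be illustrated with a minimal tabular sketch (not the authors' code): on a hypothetical 5-state chain with a single reward at the end, one-step TD learning (lambda = 0) updates only the state adjacent to the reward after one episode, whereas TD with eligibility traces (lambda = 0.9) assigns credit to every state in the sequence from that single experience. The chain length, learning rate, and lambda values below are illustrative assumptions.

```python
# Illustrative sketch: tabular TD(0) vs TD(lambda) on a 5-state chain
# S0 -> S1 -> ... -> S4, with reward 1.0 on reaching the terminal state S4.
# All values, parameters, and the chain itself are hypothetical examples.

def run_episode(n_states, alpha, lam, gamma=1.0):
    V = [0.0] * n_states          # state-value estimates, initialized to 0
    e = [0.0] * n_states          # eligibility traces
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0          # reward only at the end
        v_next = 0.0 if s == n_states - 1 else V[s + 1]
        delta = r + gamma * v_next - V[s]              # TD error
        e[s] += 1.0                                     # mark current state eligible
        for i in range(n_states):
            V[i] += alpha * delta * e[i]                # credit all eligible states
            e[i] *= gamma * lam                         # decay traces backward in time
    return V

td0 = run_episode(5, alpha=0.5, lam=0.0)   # no trace: only the last state learns
tdl = run_episode(5, alpha=0.5, lam=0.9)   # trace: the whole sequence learns
print(td0)   # early states remain at 0.0 after one episode
print(tdl)   # all states already positive after one episode (one-shot)
```

With all values initialized to zero, the TD error is nonzero only at the rewarded final step, so without a trace no credit reaches earlier states until later episodes; the trace vector `e` is what propagates that single error along the entire sequence, which is the one-shot signature the paper tests for.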