Article

Empirical priors for reinforcement learning models

Journal

JOURNAL OF MATHEMATICAL PSYCHOLOGY
Volume 71, Pages 1-6

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jmp.2016.01.006

Keywords

Bayesian statistics; Q-learning; Parameter estimation; Model comparison

Funding

  1. Center for Brains, Minds and Machines (CBMM) - NSF STC award [CCF-1231216]

Abstract

Computational models of reinforcement learning have played an important role in understanding learning and decision-making behavior, as well as the neural mechanisms underlying these behaviors. However, fitting the parameters of these models can be challenging: the parameters may not be identifiable, estimates can be unreliable, and the fitted models may not have good predictive validity. Prior distributions on the parameters can help regularize the estimates and, to some extent, address these challenges, but picking a good prior is itself challenging. This paper presents empirical priors for reinforcement learning models, showing that priors estimated from a relatively large dataset yield parameter estimates that are more identifiable and more reliable, and models with better predictive validity, than fitting with uniform priors. (C) 2016 Elsevier Inc. All rights reserved.
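To make the general approach concrete, below is a minimal sketch (not the paper's implementation) of fitting a two-armed-bandit Q-learning model by maximum a posteriori estimation, once with a flat (uniform) prior and once with a prior on the learning rate and inverse temperature. The Beta/Gamma prior families and their parameters here are illustrative assumptions standing in for priors that would be estimated from a large dataset, and all function names are hypothetical.

# Minimal sketch of MAP fitting for a softmax Q-learning model.
# Prior shapes/parameters below are assumptions, not the paper's empirical priors.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def simulate(alpha, beta, reward_probs, n_trials, rng):
    """Generate choices and rewards from a softmax Q-learner on a 2-armed bandit."""
    Q = np.zeros(2)
    choices, rewards = np.empty(n_trials, int), np.empty(n_trials)
    for t in range(n_trials):
        p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice probabilities
        c = rng.choice(2, p=p)
        r = float(rng.random() < reward_probs[c])
        Q[c] += alpha * (r - Q[c])                       # prediction-error update
        choices[t], rewards[t] = c, r
    return choices, rewards

def neg_log_posterior(params, choices, rewards, use_prior):
    """Negative log-likelihood of the choices, plus (optionally) a log-prior."""
    alpha, beta = params
    if not (0 < alpha < 1) or beta <= 0:
        return np.inf
    Q, ll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        logits = beta * Q
        ll += logits[c] - np.log(np.exp(logits).sum())   # log softmax prob of chosen arm
        Q[c] += alpha * (r - Q[c])
    lp = 0.0
    if use_prior:  # assumed Beta/Gamma priors; stand-ins for empirically estimated priors
        lp = stats.beta.logpdf(alpha, 2, 2) + stats.gamma.logpdf(beta, 3, scale=1.5)
    return -(ll + lp)

rng = np.random.default_rng(0)
choices, rewards = simulate(alpha=0.3, beta=4.0, reward_probs=[0.7, 0.3],
                            n_trials=100, rng=rng)
for use_prior in (False, True):
    fit = minimize(neg_log_posterior, x0=[0.5, 1.0],
                   args=(choices, rewards, use_prior), method="Nelder-Mead")
    print("prior" if use_prior else "uniform", fit.x)

With the uniform prior the fit reduces to maximum likelihood, which on short sessions can push the inverse temperature toward extreme values; the prior term shrinks such estimates toward plausible values, which is the regularization effect the abstract describes.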

Authors

Samuel J. Gershman
