Article

Meta-learning synaptic plasticity and memory addressing for continual familiarity detection

Journal

Neuron
Volume 110, Issue 3, Pages 544+

Publisher

Cell Press
DOI: 10.1016/j.neuron.2021.11.009

Funding

  1. NIH [T32-NS064929]
  2. National Science Foundation (NSF) NeuroNex award [DBI-1707398]
  3. Gatsby Charitable Foundation [GAT3708]
  4. Simons Collaboration for the Global Brain
  5. Simons Foundation
  6. NIH Research Facility Improvement Grant [1G20RR030893-01]
  7. New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) [C090171]

Abstract

This article investigates how memories are encoded, stored, and addressed during continual learning. Using meta-learned network models of familiarity detection, the authors find that anti-Hebbian plasticity supports familiarity memory better than Hebbian plasticity, and that a combinatorial addressing function emerges to select where memories are stored and retrieved. The resulting network operates continuously and generalizes to intervals it has not been trained on.
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task in which a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian plasticity and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning and demonstrates an effective application of machine learning for neuroscience discovery.
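
To make the mechanism concrete, below is a minimal sketch of the kind of anti-Hebbian plasticity step the abstract describes. It is illustrative only: the layer sizes, the tanh nonlinearity, the fixed random addressing weights A, the plasticity rate eta, the decay lam, and the mean-absolute-activity readout are all assumptions of this sketch; in the paper, these components are meta-learned rather than hand-set. What the sketch demonstrates is repetition suppression: because memories are written with a negative (anti-Hebbian) sign, a repeated stimulus evokes a weaker response than a novel one, so a readout can report familiarity by thresholding the response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions of this sketch, not taken from the paper)
d_in, d_hid = 100, 50

# Fixed "addressing" weights A plus a plastic synaptic memory matrix M.
# In the paper, A and the plasticity parameters are meta-learned; here
# A is simply random and the memory M starts empty.
A = rng.standard_normal((d_hid, d_in)) / np.sqrt(d_in)
M = np.zeros((d_hid, d_in))

eta, lam = 0.5, 0.9  # plasticity rate and passive decay (hypothetical values)

def step(x, M):
    """One stimulus presentation: read out a response, then write the
    stimulus into M with an anti-Hebbian (negative-sign) update."""
    h = np.tanh((A + M) @ x)             # hidden activity: fixed + plastic pathway
    response = np.abs(h).mean()          # proxy readout of population response
    M = lam * M - eta * np.outer(h, x)   # anti-Hebbian write with decay
    return response, M

# A repeated stimulus evokes a suppressed response (repetition suppression),
# so "response below a threshold" can serve as the familiarity report.
x = rng.standard_normal(d_in)
x /= np.linalg.norm(x)
r1, M = step(x, M)   # novel presentation
r2, M = step(x, M)   # repeated presentation -> weaker response
print(f"novel: {r1:.3f}  repeated: {r2:.3f}")  # r2 < r1
```

The decay term lam * M is what allows continual operation in this sketch: older traces fade over time, freeing capacity for new stimuli, with the decay rate setting the interval over which familiarity can still be reported.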

