Article

How multisensory neurons solve causal inference

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.2106235118

Keywords

causal inference; multisensory integration; MSTd; visual and vestibular; deep neural network

Funding

  1. Leverhulme Trust [ECF-2017-573]
  2. Isaac Newton Trust [17.08(o)]
  3. Australian Research Council [DE210100790]
  4. Deutsche Forschungsgemeinschaft [SFB-TRR-135]
  5. Alexander von Humboldt fellowship

Abstract

The brain must determine whether sensations of motion arise from a single cause or from several; only when visual and vestibular cues share a cause does integrating them yield a more precise estimate of self-motion. The macaque medial superior temporal area (MSTd) contains neurons that encode combinations of vestibular and visual motion cues, some responding best to congruent cues and others to opposite cues. A neural network model trained to perform causal inference for motion estimation develops both congruent and opposite units, and the balance between their activities determines whether the cues should be integrated or separated.
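
As background, the integrate-or-separate problem referred to here is commonly formalized as Bayesian causal inference over the number of causes. The generic formulation below (with x_vis and x_vest the noisy cues, s the heading, and C the number of causes) is illustrative and is not taken from the paper itself:

p(C{=}1 \mid x_{\mathrm{vis}}, x_{\mathrm{vest}}) = \frac{p(x_{\mathrm{vis}}, x_{\mathrm{vest}} \mid C{=}1)\, p(C{=}1)}{\sum_{C \in \{1,2\}} p(x_{\mathrm{vis}}, x_{\mathrm{vest}} \mid C)\, p(C)}

\hat{s} = p(C{=}1 \mid x_{\mathrm{vis}}, x_{\mathrm{vest}})\, \hat{s}_{\mathrm{int}} + p(C{=}2 \mid x_{\mathrm{vis}}, x_{\mathrm{vest}})\, \hat{s}_{\mathrm{vest}}

Here \hat{s}_{\mathrm{int}} is the reliability-weighted combination of the single-cue estimates: when the posterior probability of a single cause is high the cues are effectively integrated, and when it is low the vestibular estimate dominates.
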
Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (congruent neurons), while others prefer opposing directions (opposite neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
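
To make the idea of training a network for this task concrete, the sketch below is a minimal, hypothetical setup (written in plain PyTorch; it is not the authors' published architecture, loss, or stimulus set). A small feedforward network receives noisy visual and vestibular heading cues and is trained to report both a self-motion estimate and whether the two cues share a common cause; its hidden units can afterwards be probed for congruent versus opposite tuning.

import torch
import torch.nn as nn

def make_trials(n, p_common=0.5, noise=0.3):
    # Synthetic trials: each cue is a noisy observation of a heading (radians).
    # With probability p_common both cues are generated by the same heading.
    common = (torch.rand(n) < p_common).float()
    s_vest = (torch.rand(n) - 0.5) * torch.pi      # true self-motion heading
    s_other = (torch.rand(n) - 0.5) * torch.pi     # independent visual heading
    s_vis = common * s_vest + (1 - common) * s_other
    x_vest = s_vest + noise * torch.randn(n)
    x_vis = s_vis + noise * torch.randn(n)
    return torch.stack([x_vest, x_vis], dim=1), s_vest, common

class CausalInferenceNet(nn.Module):
    # Feedforward body with two readouts: heading estimate and "single cause" logit.
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.heading = nn.Linear(hidden, 1)
        self.same_cause = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.heading(h).squeeze(-1), self.same_cause(h).squeeze(-1), h

net = CausalInferenceNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x, s_vest, common = make_trials(256)
    s_hat, c_logit, _ = net(x)
    loss = (nn.functional.mse_loss(s_hat, s_vest)
            + nn.functional.binary_cross_entropy_with_logits(c_logit, common))
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, hidden units can be characterized as congruent or opposite by
# comparing their responses when x_vest and x_vis move in the same versus
# opposite directions, analogous to the tuning comparison made for MSTd neurons.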
