4.5 Article

A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

Journal

FRONTIERS IN NEUROROBOTICS
Volume 11, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnbot.2017.00020

Keywords

path integration; artificial intelligence; insect navigation; neural networks; reward-based learning

Funding

  1. Centre for BioRobotics (CBR) at University of Southern Denmark (SDU, Denmark)
  2. Fundação para a Ciência e a Tecnologia (FCT)
  3. Bernstein Center for Computational Neuroscience II Göttingen (BCCN) [01GQ1005A]
  4. Horizon 2020 Framework Programme [732266]

Abstract

Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. In social insects such as ants and bees, these navigational capabilities are guided by orientation-directing vectors generated by a process called path integration. During this process, the insects integrate compass and odometric cues to estimate their current location as a vector, called the home vector, which guides them back to the nest along a straight path. They further acquire and retrieve path-integration-based vector memories anchored globally to the nest or locally to visual landmarks. Although existing computational models have reproduced similar behaviors, a neurocomputational model of vector navigation that includes the acquisition of vector representations has not been described before. Here we present a model of the neural mechanisms, embedded in a modular closed-loop control scheme, that enable vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed from the combination of vector memories and random exploration. In simulation, we show that these neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation under realistic conditions. We thereby provide a novel approach to vector learning and navigation in a simulated, situated agent, linking behavioral observations to their possible underlying neural substrates.
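
The abstract describes the model only at a high level. The Python sketch below is an illustration of the general kind of mechanism outlined (circular-array path integration, population-vector decoding, a reward-modulated vector memory, and steering that blends goal direction with random exploration). It is not the authors' implementation: the array size, cosine tuning, delta-rule memory update, learning rate, and exploration weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 18                                                    # assumed number of directionally tuned cells
PREF = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred directions of the circular array


def compass_layer(heading):
    """Rectified cosine-tuned compass responses for the current heading (an assumption)."""
    return np.maximum(np.cos(PREF - heading), 0.0)


def path_integrate(pi_state, heading, speed, noise_std=0.0):
    """Accumulate odometry-weighted compass activity into the circular array.

    The population activity encodes the vector from the nest to the agent;
    negating it gives the home vector.
    """
    drive = speed * compass_layer(heading)
    if noise_std > 0.0:                                   # optional external sensory noise
        drive += rng.normal(0.0, noise_std, N)
    return pi_state + drive


def decode(pi_state):
    """Population-vector readout: angle and relative length of the encoded vector."""
    x = np.dot(pi_state, np.cos(PREF))
    y = np.dot(pi_state, np.sin(PREF))
    return np.arctan2(y, x), np.hypot(x, y)


def learn_vector_memory(memory, pi_state, reward, lr=0.1):
    """Reward-modulated update: associate the current path-integration state with the food reward."""
    return memory + lr * reward * (pi_state - memory)


def steer(pi_state, memory, heading, explore=0.3):
    """Blend goal-directed steering toward the stored vector with random exploration."""
    goal_angle, _ = decode(memory - pi_state)             # direction from current location to the goal
    goal_turn = np.sin(goal_angle - heading)              # proportional turning toward the goal
    return (1.0 - explore) * goal_turn + explore * rng.normal(0.0, 0.5)


# Minimal usage: an outbound run, storing a vector memory at the feeder, then homing.
pi = np.zeros(N)
memory = np.zeros(N)
for heading, speed in [(0.2, 1.0)] * 50 + [(1.3, 1.0)] * 30:
    pi = path_integrate(pi, heading, speed, noise_std=0.01)
memory = learn_vector_memory(memory, pi, reward=1.0)      # food found: store the PI state
home_direction, home_distance = decode(-pi)               # home vector opposes the PI state
```

The closed-loop character of the model would come from feeding the output of steer back into the agent's heading at every time step; that outer simulation loop is omitted here for brevity.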
