Article

Vision-based place recognition: how low can you go?

Journal

International Journal of Robotics Research
Volume 32, Issue 7, Pages 766-789

Publisher

SAGE Publications Ltd
DOI: 10.1177/0278364913490323

Keywords

place recognition; vision; SeqSLAM; low resolution; vision-based place recognition

Funding

  1. Australian Research Council [DE120100995]

Abstract

In this paper we use the SeqSLAM algorithm to address the question: how little visual information, and of what quality, is needed to localize along a familiar route? We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations: in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images; in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
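The core idea the abstract describes, matching a sequence of heavily downsampled frames against a route rather than matching single images, can be illustrated with a minimal SeqSLAM-style sketch. This is an assumption-laden toy (normalized image differences plus a straight, equal-velocity trajectory search), not the authors' implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def seqslam_match(query, database, seq_len=10):
    """Toy SeqSLAM-style matcher (illustrative sketch only).

    `query` and `database` are arrays of tiny grayscale frames,
    shape (N, H, W). Returns the database index where the query
    sequence best aligns, assuming equal travel speed.
    """
    # Flatten and contrast-normalize each low-resolution frame,
    # so matching depends on appearance pattern, not brightness.
    q = query.reshape(len(query), -1).astype(float)
    d = database.reshape(len(database), -1).astype(float)
    q = (q - q.mean(axis=1, keepdims=True)) / (q.std(axis=1, keepdims=True) + 1e-9)
    d = (d - d.mean(axis=1, keepdims=True)) / (d.std(axis=1, keepdims=True) + 1e-9)

    # Difference matrix: mean absolute difference between every
    # query frame and every database frame.
    D = np.mean(np.abs(q[:, None, :] - d[None, :, :]), axis=2)

    # Sum differences along straight candidate trajectories of
    # length seq_len and return the lowest-cost starting offset.
    n_q, n_d = D.shape
    L = min(seq_len, n_q)
    costs = [D[np.arange(L), np.arange(L) + off].sum()
             for off in range(n_d - L + 1)]
    return int(np.argmin(costs))
```

Even with individual frames too small or noisy to match reliably on their own, the accumulated cost over a longer trajectory tends to single out the correct place, which mirrors the paper's finding that performance improves with matching sequence length.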
