Article

IQ-VIO: adaptive visual inertial odometry via interference quantization under dynamic environments

Journal

INTELLIGENT SERVICE ROBOTICS

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s11370-023-00478-2

Keywords

Localization; Visual inertial odometry; Dynamic simultaneous localization and mapping (SLAM); Extended Kalman filter (EKF)


In this paper, a novel adaptive visual inertial odometry method called IQ-VIO is proposed to eliminate the impact of dynamic objects on localization. The method quantifies the confidence of pose estimation through analysis of vision frames and adaptively adjusts the measurement error covariance matrix. Experimental results show that the proposed IQ-VIO algorithm outperforms comparable algorithms, achieving higher positioning accuracy and robustness.
Vision-based localization is susceptible to interference from dynamic objects in the environment, resulting in decreased localization accuracy and even tracking loss. Hence, sensor fusion with IMUs or motor encoders has been widely adopted to improve positioning accuracy and robustness in dynamic environments. However, commonly used loosely coupled fusion localization methods cannot completely eliminate the error introduced by dynamic objects. In this paper, we propose a novel adaptive visual inertial odometry method based on interference quantization, namely IQ-VIO. To quantify the confidence of pose estimation through analysis of vision frames, we first introduce the feature coverage and a dynamic scene interference index based on image information entropy. Building on the interference index, we then establish the IQ-VIO multi-sensor fusion model, which adaptively adjusts the measurement error covariance matrix of an extended Kalman filter to suppress and eliminate the impact of dynamic objects on localization. We verify the IQ-VIO algorithm on the KAIST Urban dataset and in real-world scenes. Results show that our method achieves favorable performance compared with other algorithms. In particular, under challenging scenes such as low texture, the relative pose error (RPE) of our algorithm decreases by at least twenty percent. Our approach can effectively eliminate the impact of dynamic objects in the scenes and obtain higher positioning accuracy and robustness than conventional methods.
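The core mechanism the abstract describes — an interference index derived from image information entropy that inflates the EKF measurement covariance — can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the functions `image_entropy`, `interference_index`, and the inflation gain are hypothetical stand-ins for the IQ-VIO definitions, which are given in the paper itself.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def interference_index(entropy, feature_coverage, max_entropy=8.0):
    """Hypothetical interference index in [0, 1]: high when the frame has low
    texture (low entropy) or poor feature coverage. The paper's actual index
    combines these cues differently."""
    return (1.0 - entropy / max_entropy) * (1.0 - feature_coverage)

def ekf_update(x, P, z, H, R_base, q, gain=10.0):
    """Standard EKF measurement update, with the measurement covariance R
    inflated by the interference index q so that unreliable vision frames
    are down-weighted. The inflation law R_base * (1 + gain * q) is illustrative."""
    R = R_base * (1.0 + gain * q)
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # state correction
    P_new = (np.eye(len(x)) - K @ H) @ P     # covariance update
    return x_new, P_new
```

With a large index `q` the gain `K` shrinks, so a dynamic or low-texture frame contributes less to the fused pose, which is the qualitative behavior IQ-VIO aims for.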

