Article

Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

Journal

SENSORS
Volume 12, Issue 7, Pages 8640-8662

Publisher

MDPI
DOI: 10.3390/s120708640

Keywords

Kinect™ sensor; depth sensing camera; 3D acquisition; uncertainty model; visual feature; depth calibration; disparity map; point cloud

This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. The model supports both qualitative and quantitative analysis of Kinect™ sensors used as 3D perception sensors. To this end, we derived how uncertainty propagates between the disparity image space and the real Cartesian space, using the mapping function between the two spaces. From this propagation relationship, we obtained a mathematical model for the covariance matrix of the measurement error, which represents the uncertainty in the spatial position of visual features measured by Kinect™ sensors. To derive a quantitative model of this spatial uncertainty, we estimated the covariance matrix in the disparity image space from collected visual feature data, and then computed the spatial uncertainty by applying that covariance matrix, together with the calibrated sensor parameters, to the proposed mathematical model. The spatial uncertainty model was verified by comparing the uncertainty ellipsoids of the spatial covariance matrices with the distribution of scattered matched visual features. We expect this spatial uncertainty model and its analyses to be useful in a wide range of Kinect™ sensor applications.
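The propagation the abstract describes can be sketched as first-order covariance propagation through the standard stereo/pinhole disparity-to-depth mapping. This is a minimal illustration, not the paper's implementation: the intrinsic parameters, baseline, and noise values below are made-up placeholders.

```python
import numpy as np

# Hypothetical sensor intrinsics (NOT from the paper): focal length f [px],
# principal point (cx, cy) [px], IR-projector baseline b [m].
f, cx, cy, b = 580.0, 320.0, 240.0, 0.075

def disparity_to_xyz(u, v, d):
    """Map a pixel (u, v) with disparity d [px] to Cartesian (x, y, z) [m]
    via the standard stereo/pinhole model z = f*b/d."""
    z = f * b / d
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def jacobian(u, v, d):
    """Jacobian of the (u, v, d) -> (x, y, z) mapping, evaluated at (u, v, d)."""
    z = f * b / d
    return np.array([
        [z / f, 0.0,   -(u - cx) * z / (f * d)],
        [0.0,   z / f, -(v - cy) * z / (f * d)],
        [0.0,   0.0,   -z / d],
    ])

# Measurement covariance in the disparity image space (u, v, d), e.g. as
# estimated from matched visual features; values here are illustrative only.
sigma_img = np.diag([0.5**2, 0.5**2, 0.3**2])

u, v, d = 400.0, 300.0, 40.0
J = jacobian(u, v, d)
sigma_cart = J @ sigma_img @ J.T  # first-order propagation to (x, y, z)
```

The eigenvectors and eigenvalues of `sigma_cart` give the axes of the uncertainty ellipsoid mentioned in the abstract; because depth enters as z = f*b/d, the depth variance grows roughly with the fourth power of range.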

