Journal
NEUROCOMPUTING
Volume 257, Pages 115-127
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2016.10.073
Keywords
Tracking; CNN; Spatial-temporal; Saliency; Sampling
Funding
- National Natural Science Foundation, China [61571362, 61363046, 61403182]
- National Research Foundation
- Prime Minister's Office, Singapore under International Research Centre in Singapore Funding Initiative
- Jiangxi Provincial Department of Science and Technology [20153BCB23029]
Abstract
Tracking arbitrary objects is hard due to the continual intrinsic and extrinsic variations in realistic scenarios. Even for popular tracking-by-learning strategies, effective appearance modeling of non-rigid objects remains challenging because of the targets' on-the-fly articulatory deformations, which may heavily degrade the discriminative capability of the online-generated visual features. With the wide emergence of deep learning and its success at feature extraction in different recognition tasks, more and more deep models such as CNNs have been demonstrated to improve the performance of online tracking. However, depending only on the outputs of the last CNN layer is not an optimal representation, since their coarse spatial resolution cannot guarantee localization accurate enough for a qualified sampling process; especially when objects deform severely, sampling from a region with a pre-defined scale inevitably misguides the online learning. To overcome this limitation of CNN-based tracking, in this work we incorporate spatial-temporal saliency detection to guide a more accurate target localization for qualified sampling within an inter-frame motion flow map. With an optional strategy for combining the outputs of intra-frame appearance correlations and inter-frame motion saliency based on a compositional energy optimization, the proposed tracker shows superior performance compared with other state-of-the-art trackers on both challenging non-rigid and generic tracking benchmark datasets. (C) 2017 Elsevier B.V. All rights reserved.
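The abstract's combination of intra-frame appearance correlation with inter-frame motion saliency could be sketched, in a much-simplified form, as a weighted fusion of two normalized score maps whose peak gives the target location. This is an illustrative assumption only: the scalar weight `alpha` and the linear combination stand in for the paper's compositional energy optimization, which is not specified in the abstract.

```python
import numpy as np

def combine_maps(appearance, saliency, alpha=0.6):
    """Fuse an appearance-correlation map with a motion-saliency map.

    alpha and the linear form are illustrative stand-ins for the
    paper's compositional energy optimization (details not given
    in the abstract).
    """
    def norm(m):
        # Scale each map to [0, 1] so the two scores are comparable.
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m
    return alpha * norm(appearance) + (1 - alpha) * norm(saliency)

def localize(energy):
    # The peak of the fused energy map is taken as the target centre,
    # from which a sampling region could then be drawn.
    return np.unravel_index(np.argmax(energy), energy.shape)

# Toy example: both cues agree on position (2, 3).
app = np.zeros((5, 5)); app[2, 3] = 1.0
sal = np.zeros((5, 5)); sal[2, 3] = 0.8
print(localize(combine_maps(app, sal)))  # → (2, 3)
```

When the two cues disagree (e.g. under severe deformation, where the appearance peak drifts), the saliency term pulls the fused peak back toward the motion evidence, which is the intuition the abstract gives for saliency-guided sampling.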
Authors