Article

Object Tracking Benchmark

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society

DOI: 10.1109/TPAMI.2014.2388226

Keywords

Object tracking; benchmark dataset; performance evaluation

Funding

  1. NSFC [61005027, 61370036]
  2. ICT R&D program of MSIP/IITP [10047078, 14-824-09-006]
  3. ICT R&D program of MSIP/NIPA (CITRC program) [NIPA-2014-H0401-14-1001]
  4. National Science Foundation CAREER [1149783]
  5. National Science Foundation IIS [1152576]
  6. Institute for Information & Communication Technology Planning & Evaluation (IITP), Republic of Korea [R0101-14-0185]
  7. Korea Evaluation Institute of Industrial Technology (KEIT) [10047078]
  8. Ministry of Public Safety & Security (MPSS), Republic of Korea [B0101-15-0552, H8601-15-1005]
  9. National Science Foundation, Division of Information & Intelligent Systems, Directorate for Computer & Information Science & Engineering [1149783]

Abstract

Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often insufficient or biased toward certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, which makes comparisons among the reported quantitative results difficult. In addition, the initial conditions and parameters of the evaluated tracking algorithms are not the same, so the quantitative results reported in the literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce sequence attributes for performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.
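The benchmark's standard evaluation criteria are precision plots (fraction of frames whose center location error falls under a pixel threshold) and success plots (fraction of frames whose bounding-box overlap exceeds an IoU threshold). A minimal sketch of both criteria in Python, assuming boxes in [x, y, w, h] format; this is an illustrative reimplementation, not the authors' released toolkit:

```python
import numpy as np

def center_error(pred, gt):
    # Euclidean distance between box centers, per frame.
    # pred, gt: (N, 4) arrays of [x, y, w, h] boxes.
    cp = pred[:, :2] + pred[:, 2:] / 2.0
    cg = gt[:, :2] + gt[:, 2:] / 2.0
    return np.linalg.norm(cp - cg, axis=1)

def overlap(pred, gt):
    # Intersection-over-union of axis-aligned boxes, per frame.
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def precision_curve(errors, thresholds=np.arange(0, 51)):
    # Fraction of frames with center error within each pixel threshold.
    return np.array([(errors <= t).mean() for t in thresholds])

def success_curve(ious, thresholds=np.linspace(0.0, 1.0, 21)):
    # Fraction of frames with IoU above each overlap threshold;
    # the area under this curve is the common ranking score.
    return np.array([(ious > t).mean() for t in thresholds])
```

Trackers are typically ranked by the precision value at a 20-pixel threshold and by the area under the success curve, averaged over all sequences.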
