Article

Object Detection with Discriminatively Trained Part-Based Models

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2009.167

Keywords

Object recognition; deformable models; pictorial structures; discriminative training; latent SVM

Funding

  1. US National Science Foundation [IIS 0746569, IIS 0811340, IIS 0812428]
  2. NSF Directorate for Computer & Information Science & Engineering [0811340, 0812428, 1215812]
  3. NSF Division of Information & Intelligent Systems [0746569, 0812428]

Abstract

We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
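
For readers who want the alternation in concrete form, the following is a minimal Python sketch of the coordinate-descent idea the abstract describes: fix the latent values of the positive examples under the current model, then optimize the now-convex objective. It is an illustration under simplifying assumptions, not the authors' implementation; the function names (best_latent, train_latent_svm) and the plain subgradient inner loop are stand-ins.

```python
import numpy as np

def best_latent(beta, candidates):
    """Return the candidate feature vector phi(x, z) maximizing beta . phi."""
    return max(candidates, key=lambda phi: float(beta @ phi))

def train_latent_svm(positives, negatives, dim, C=1.0, lr=1e-2,
                     outer_iters=5, inner_iters=200):
    """positives / negatives: one list of candidate phi(x, z) vectors
    (numpy arrays of length `dim`) per example; each candidate encodes
    one choice of latent values z."""
    beta = np.zeros(dim)
    for _ in range(outer_iters):
        # Step 1: fix latent values for the positive examples under the
        # current model.
        pos_feats = [best_latent(beta, cands) for cands in positives]
        # Step 2: with positive latents fixed the objective is convex;
        # a plain subgradient loop on the hinge loss stands in for the
        # authors' solver.
        for _ in range(inner_iters):
            grad = beta.copy()                   # gradient of 0.5*||beta||^2
            for phi in pos_feats:                # y = +1, latent z fixed
                if float(beta @ phi) < 1.0:
                    grad -= C * phi
            for cands in negatives:              # y = -1, max over latent z
                phi = best_latent(beta, cands)
                if float(beta @ phi) > -1.0:
                    grad += C * phi
            beta -= lr * grad
    return beta

# Tiny synthetic usage: 2-D features, two latent candidates per example.
rng = np.random.default_rng(0)
pos = [[rng.normal(+1, 0.3, 2), rng.normal(+1, 0.3, 2)] for _ in range(20)]
neg = [[rng.normal(-1, 0.3, 2), rng.normal(-1, 0.3, 2)] for _ in range(20)]
print("learned weights:", train_latent_svm(pos, neg, dim=2))
```

Holding the positive latent values fixed is exactly what makes the problem convex in beta (the semiconvexity noted above); the paper pairs this outer loop with margin-sensitive data mining of hard negative examples rather than the exhaustive scan over negatives used in this sketch.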

Authors

Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan
