Journal
JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
Volume 37, Pages 46-52
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jvcir.2015.06.012
Keywords
Latent low-rank representation; Sparse learning; Subspace clustering; Robust recovery; Visual analysis; Augmented Lagrangian Multiplier method; Feature extraction; Outlier detection
Funding
- Zhejiang Provincial Natural Science Foundation of China [LQ15F020012]
- National Key Technology Support Program of China [2012BAI34B03]
Robust recovery of subspace structures from noisy data has recently received much attention in visual analysis. To achieve this goal, previous works have developed a number of low-rank based methods, among which Low-Rank Representation (LRR) is a representative example. As a refined variant, Latent LRR constructs the dictionary from both observed and hidden data to alleviate the insufficient-sampling problem. However, these methods overlook the observation that each data point can be represented by only a small subset of atoms in a dictionary. Motivated by this, we present the Sparse Latent Low-rank representation (SLL) method, which explicitly imposes a sparsity constraint on Latent LRR to encourage sparse representations. In this way, each data point is represented by selecting only a few points from the same subspace. The objective function is solved by the linearized Augmented Lagrangian Multiplier (ALM) method. Favorable experimental results on subspace clustering, salient feature extraction, and outlier detection verify the promising performance of our method. (C) 2015 Elsevier Inc. All rights reserved.
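The linearized ALM solver mentioned in the abstract alternates proximal updates for the nuclear-norm (low-rank) and l1-norm (sparse) terms. As a hedged illustration, and not the authors' actual implementation, the sketch below shows the two standard proximal operators such solvers rely on: entrywise soft-thresholding for the sparse term and singular value thresholding (SVT) for the low-rank term.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: entrywise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm.

    Shrinks the singular values of X by tau, which promotes low rank.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

# Toy demonstration: a rank-3 matrix corrupted by sparse outliers.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))  # low-rank part
S = np.zeros((50, 50))
S.flat[rng.choice(2500, size=50, replace=False)] = 10.0          # sparse part
X = L + S

# One proximal step of each kind, as a linearized ALM iteration would take.
Z_lowrank = svt(X, tau=5.0)
E_sparse = soft_threshold(X - Z_lowrank, tau=1.0)
```

In the full SLL objective these steps would be interleaved with Lagrange-multiplier updates and a linearization of the coupling constraint; the operators above are only the per-iteration building blocks.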