Article

Tensor low-rank sparse representation for tensor subspace learning

Journal

NEUROCOMPUTING
Volume 440, Pages 351-364

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.02.002

Keywords

Tensor subspace learning; Low-rank representation; Sparse representation; Tensor ℓ2,1-norm

Funding

  1. National Natural Science Foundation of China [61866033]
  2. Natural Science Foundation of Gansu Province, China [18JR3RA369]
  3. Introduction of Talent Research Project of Northwest Minzu University [xbmuyjrc201904]
  4. Fundamental Research Funds for the Central Universities of Northwest Minzu University [31920200064, 31920200097]
  5. Gansu Provincial First-class Discipline Program of Northwest Minzu University [11080305]
  6. Leading Talent of National Ethnic Affairs Commission (NEAC)
  7. Young Talent of NEAC
  8. Innovative Research Team of NEAC [(2018) 98]


Many subspace learning methods operate on a matrix of sample data. For multi-dimensional data, these methods must first convert data samples into vectors, which often destroys the inherent spatial structure of the samples. In this paper, we propose a robust tensor low-rank sparse representation (TLRSR) method that performs subspace learning directly on three-dimensional tensors. Firstly, the dual constraints of low-rankness and sparseness make the representation tensor effectively capture the global structure and the local structure of the sample data, respectively. Secondly, to deal with outliers and noise, we adopt the tensor ℓ2,1-norm to characterize the noise of the tensor composed of multiple samples. Thirdly, the denoised tensor, instead of the original tensor, is used as the dictionary for finding the low-rank sparse representation tensor. Finally, an iterative update algorithm is proposed for the optimization of TLRSR. Compared with state-of-the-art methods, clustering experiments on face images and denoising experiments on real images verify the good performance of the proposed TLRSR in tensor subspace learning. (C) 2021 Elsevier B.V. All rights reserved.
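The tensor ℓ2,1-norm mentioned in the abstract penalizes noise sample-wise rather than entry-wise, which encourages whole corrupted samples to be identified as outliers. A minimal NumPy sketch of one common definition, summing the Frobenius norms of the lateral slices of a third-order tensor; the function name and the slicing convention are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def tensor_l21_norm(E):
    """Sketch of a tensor l(2,1)-norm for a 3-D tensor E of shape (n1, n2, n3).

    Assumed definition: the sum over the n2 lateral slices E[:, j, :]
    of their Frobenius norms, so the error of each sample (one lateral
    slice) is penalized jointly. The paper's exact definition may differ.
    """
    n1, n2, n3 = E.shape
    # np.linalg.norm on a 2-D array defaults to the Frobenius norm.
    return sum(np.linalg.norm(E[:, j, :]) for j in range(n2))
```

Because the norm is a sum of per-slice Frobenius norms (rather than a single entry-wise ℓ1 or Frobenius penalty on the whole tensor), minimizing it drives entire slices of the error tensor toward zero, matching the abstract's goal of characterizing sample-level noise.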

