Article

Mutual variation of information on transfer-CNN for face recognition with degraded probe samples

Journal

NEUROCOMPUTING
Volume 310, Pages 299-315

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2018.05.038

Keywords

Face recognition; Transfer-CNN; Mutual variation of information; Domain adaptation; Deep-DA; TM; KLD; SGD; 3-MET

Learning based on convolutional neural networks (CNNs), or deep learning, has been a major research area with applications in face recognition (FR). Under degraded conditions, the performance of FR algorithms drops severely. The work presented in this paper makes three contributions. First, it proposes a transfer-CNN architecture of deep learning tailor-made for domain adaptation (DA), to overcome the difference in feature distributions between the gallery and probe samples. The proposed architecture consists of three units: the base convolution module (BCM), transfer module (TM) and linear module (LM). Second, a novel 3-stage algorithm for Mutually Exclusive Training (3-MET), based on stochastic gradient descent, is proposed. The initial stage of 3-MET updates the parameters of the BCM and LM units using samples from the gallery. The second stage updates the parameters of the TM, to bridge the disparity between the source and target distributions, based on mutual variation of information (MVI). The final stage of training in 3-MET freezes the layers of the BCM and TM, updating (fine-tuning) only the parameters of the LM using a few probe (target) samples. This helps the proposed transfer-CNN provide an enhanced domain-invariant representation for efficient deep-DA learning and classification. The third contribution comes from rigorous experimentation performed on three benchmark real-world degraded face datasets captured using surveillance cameras, one real-world dataset with non-uniform motion blur, and three synthetically degraded benchmark face datasets. These experiments show superior performance of the proposed transfer-CNN architecture with 3-MET training, measured by Rank-1 recognition rates and ROC and CMC metrics, over many recent state-of-the-art CNN and DA techniques. Experiments also include performance analysis under unbiased training with two large-scale chimeric face datasets. (C) 2018 Elsevier B.V. All rights reserved.
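The staged, mutually exclusive training described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy in NumPy, not the authors' code: three parameter groups stand in for the BCM, TM and LM units, a simple quadratic loss stands in for the real FR and MVI objectives, and each 3-MET stage updates only its designated group while the others stay frozen.

```python
import numpy as np

def sgd_step(params, grads, trainable, lr=0.1):
    """One SGD step that updates only the parameter groups named in `trainable`;
    all other groups are left frozen (returned unchanged)."""
    return {name: (value - lr * grads[name] if name in trainable else value)
            for name, value in params.items()}

def toy_grads(params):
    # Gradient of the stand-in loss L = sum(v**2) for each group.
    return {name: 2.0 * value for name, value in params.items()}

# Three parameter groups mimicking the transfer-CNN's modules.
params = {"bcm": np.ones(4), "tm": np.ones(4), "lm": np.ones(4)}

# Stage 1: train BCM and LM on gallery (source) samples; TM is untouched.
for _ in range(5):
    params = sgd_step(params, toy_grads(params), trainable={"bcm", "lm"})

# Stage 2: freeze BCM and LM; update only TM (in the paper, driven by the
# MVI criterion to align source and target distributions).
for _ in range(5):
    params = sgd_step(params, toy_grads(params), trainable={"tm"})

# Stage 3: freeze BCM and TM; fine-tune only LM on a few probe (target) samples.
for _ in range(5):
    params = sgd_step(params, toy_grads(params), trainable={"lm"})
```

After the three stages, only the LM group has been updated twice (stages 1 and 3), while BCM and TM each received exactly one stage of updates, mirroring the mutually exclusive schedule. In a deep-learning framework the same effect is usually achieved by toggling per-module gradient flags (e.g. `requires_grad` in PyTorch) between stages.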
