Article

Unconstrained Facial Action Unit Detection via Latent Feature Domain

Journal

IEEE Transactions on Affective Computing
Volume 13, Issue 2, Pages 1111-1126

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TAFFC.2021.3091331

Keywords

Gold; Feature extraction; Annotations; Training; Games; Face recognition; Correlation; Unconstrained facial AU detection; domain adaptation; landmark adversarial loss; feature disentanglement

Funding

  1. National Key R&D Program of China [2019YFC1521104]
  2. National Natural Science Foundation of China [61972157]
  3. Natural Science Foundation of Jiangsu Province [BK20201346]
  4. Six Talent Peaks Project in Jiangsu Province [2015-DZXX-010]
  5. Monash FIT Start-up Grant
  6. Fundamental Research Funds for the Central Universities [2021QN1072]
  7. Zhejiang Lab [2020NB0AB01]
  8. Data Science & Artificial Intelligence Research Centre@NTU (DSAIR)


Summary

In this paper, an end-to-end unconstrained facial action unit (AU) detection framework based on domain adaptation is proposed. The framework transfers accurate AU labels from a constrained source domain to an unconstrained target domain using AU-related facial landmarks. Experimental results demonstrate that the method outperforms other approaches on challenging in-the-wild benchmarks.
Abstract

Facial action unit (AU) detection in the wild is challenging due to the unconstrained variability of facial appearances and the lack of accurate annotations. Most existing methods rely on either impractically labor-intensive labeling or inaccurate pseudo labels. In this paper, we propose an end-to-end unconstrained facial AU detection framework based on domain adaptation, which transfers accurate AU labels from a constrained source domain to an unconstrained target domain by exploiting labels of AU-related facial landmarks. Specifically, we map a labeled source image and an unlabeled target image into a latent feature domain by combining the source landmark-related feature with the target landmark-free feature. Because the latent features combine source AU-related information with target AU-free information, the latent feature domain, with the source labels transferred to it, can be learned by maximizing target-domain AU detection performance. Moreover, we introduce a novel landmark adversarial loss that disentangles the landmark-free feature from the landmark-related feature by treating the adversarial learning as a multi-player minimax game. Our framework can also be naturally extended to use target-domain pseudo AU labels. Extensive experiments show that our method soundly outperforms both the lower and upper bounds of the basic model, as well as state-of-the-art approaches, on challenging in-the-wild benchmarks. The code is available at https://github.com/ZhiwenShao/ADLD.
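To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of its two core ideas: fusing the source landmark-related feature with the target landmark-free feature into a latent feature domain supervised by source AU labels, and a landmark adversarial loss that removes landmark information from the landmark-free branch. The module names (Encoder, AUClassifier, LandmarkRegressor), the additive fusion, the MSE-based adversarial term, and the AU/landmark counts are illustrative assumptions, not the authors' implementation; the official code is at https://github.com/ZhiwenShao/ADLD.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Splits an image into a landmark-related (AU-related) feature map and a
    landmark-free feature map. The backbone here is a toy stand-in."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 2 * feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.to_related = nn.Conv2d(2 * feat_ch, feat_ch, 1)
        self.to_free = nn.Conv2d(2 * feat_ch, feat_ch, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.to_related(h), self.to_free(h)


class AUClassifier(nn.Module):
    """Multi-label AU detector operating on a feature map."""
    def __init__(self, feat_ch=64, num_aus=12):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, num_aus)
        )

    def forward(self, feat):
        return self.head(feat)  # raw logits, one per AU


class LandmarkRegressor(nn.Module):
    """Adversary: tries to recover landmark coordinates from a feature map."""
    def __init__(self, feat_ch=64, num_landmarks=49):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, 2 * num_landmarks),  # (x, y) per landmark
        )

    def forward(self, feat):
        return self.head(feat)


def training_losses(encoder, au_cls, lmk_adv,
                    src_img, src_au_label, src_landmarks, tgt_img):
    """Computes illustrative losses for one step:
    1. AU loss on the latent feature domain (source AU-related feature fused
       with target landmark-free feature), supervised by the source AU label.
    2. A landmark adversarial loss: the regressor tries to recover landmarks
       from the landmark-free feature, while the encoder is trained to make
       that impossible, disentangling the two feature branches."""
    src_rel, src_free = encoder(src_img)
    tgt_rel, tgt_free = encoder(tgt_img)

    # Latent feature domain: source AU-related info + target AU-free info.
    # (Simple additive fusion here; the paper's fusion may differ.)
    latent = src_rel + tgt_free
    au_logits = au_cls(latent)
    loss_au = F.binary_cross_entropy_with_logits(au_logits, src_au_label)

    # Adversary update: minimize landmark regression error on the
    # landmark-free feature (detached so only the adversary learns from it).
    loss_adv_disc = F.mse_loss(lmk_adv(src_free.detach()), src_landmarks)

    # Encoder update: maximize the adversary's error (minimax game), so the
    # landmark-free branch stops encoding landmark information.
    loss_adv_enc = -F.mse_loss(lmk_adv(src_free), src_landmarks)

    return loss_au, loss_adv_disc, loss_adv_enc
```

In practice the adversary would minimize loss_adv_disc while the encoder and AU classifier minimize loss_au plus loss_adv_enc in alternating (or gradient-reversal) updates; the paper's extension to target-domain pseudo AU labels is not reproduced in this sketch.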
