Article

Head motion synthesis from speech using deep neural networks

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 74, Issue 22, Pages 9871-9888

Publisher

SPRINGER
DOI: 10.1007/s11042-014-2156-2

Keywords

Head motion synthesis; Deep neural network; Talking avatar; Computer animation

Funding

  1. National Natural Science Foundation of China [61175018]
  2. Fok Ying Tung Education Foundation [131059]

Abstract

This paper presents a deep neural network (DNN) approach to head motion synthesis that automatically predicts a speaker's head movement from his/her speech. Specifically, we realize speech-to-head-motion mapping by learning a DNN from audio-visual broadcast news data. We first show that a generatively pre-trained neural network significantly outperforms a conventional randomly initialized network. We then demonstrate that filter bank (FBank) features outperform mel frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) for head motion prediction. Finally, we find that extra training data from other speakers, used in the pre-training stage, improves head motion prediction for a target speaker. These promising speech-to-head-motion prediction results make the approach well suited to driving talking avatar animation.
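
To make the mapping concrete, the sketch below wires stacked FBank frames into per-frame head rotation angles with a small feed-forward DNN. This is a minimal illustration, not the paper's implementation: the 40-dim FBank features, the 11-frame context window, the layer sizes, and the 3-angle (pitch/yaw/roll) output are all assumed here, and the sigmoid hidden units merely echo the RBM-style generative pre-training the abstract mentions.

```python
import torch
import torch.nn as nn

class HeadMotionDNN(nn.Module):
    """Sketch of a speech-to-head-motion regression DNN.

    Assumptions (illustrative, not from the paper): 40-dim FBank frames
    stacked over an 11-frame context window (440 inputs), and head motion
    parameterized as 3 Euler angles (pitch, yaw, roll) per frame.
    """

    def __init__(self, input_dim=440, hidden_dim=512, output_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.Sigmoid(),  # sigmoid units, as in RBM-style pre-trained nets
            nn.Linear(hidden_dim, hidden_dim),
            nn.Sigmoid(),
            nn.Linear(hidden_dim, output_dim),  # linear output for regression
        )

    def forward(self, fbank_frames):
        return self.net(fbank_frames)

model = HeadMotionDNN()
features = torch.randn(16, 440)   # a batch of stacked FBank context windows
head_angles = model(features)     # predicted pitch/yaw/roll per frame
print(head_angles.shape)          # torch.Size([16, 3])
```

In the generatively pre-trained variant the abstract compares against random initialization, each hidden layer's weights would first be learned unsupervised (e.g., as a stack of RBMs) before this network is fine-tuned on paired FBank and head motion data.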
