4.6 Article

Fine-grained talking face generation with video reinterpretation

Journal

VISUAL COMPUTER
Volume 37, Issue 1, Pages 95-105

Publisher

SPRINGER
DOI: 10.1007/s00371-020-01982-7

Keywords

Talking face; Video generation; Multi-purpose discriminators

A method is proposed for generating talking face videos from an audio clip and a face image, improving both audio-video synchronization and image fidelity.
Generating a talking face video from a given audio clip and an arbitrary face image has many applications in areas such as special visual effects and human-computer interaction. This is a challenging task, as it requires disentangling semantic information from both the input audio clip and the face image, then synthesizing novel animated facial image sequences from the combined semantic features. The desired output video should maintain both video realism and audio-lip motion consistency. To achieve these two objectives, we propose a coarse-to-fine tree-like architecture for synthesizing realistic talking face frames directly from audio clips. This is followed by a video-to-word regeneration module that translates the synthesized talking videos back to the word space, which is enforced to align with the input audio. With multi-level facial landmark attention, the proposed audio-to-video-to-words framework can generate fine-grained talking face videos that are not only synchronized with the input audio but also preserve visual details from the input face images. Multi-purpose discriminators are also adopted for adversarial learning to further improve both image fidelity and semantic consistency. Extensive experiments on the GRID and LRW datasets demonstrate the advantages of our framework over previous methods in terms of video quality and audio-video synchronization.
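The abstract describes three cooperating pieces: a generator that fuses audio and identity features into frames, a video-to-word regeneration module that forces the synthesized lip motion to remain decodable as the spoken word, and multi-purpose discriminators scoring frame realism and audio-lip synchronization. Below is a minimal PyTorch sketch of how such a combined objective could be wired together. Every module, dimension, and loss form here (TalkingFaceGenerator, WordRegressor, FrameDisc, SyncDisc, generator_loss) is a hypothetical stand-in for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TalkingFaceGenerator(nn.Module):
    # Hypothetical coarse generator: fuses a per-step audio code with an
    # identity code from the reference face, then decodes one frame per step.
    def __init__(self, audio_dim=128, id_dim=128):
        super().__init__()
        self.audio_enc = nn.GRU(input_size=40, hidden_size=audio_dim, batch_first=True)
        self.id_enc = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, id_dim))
        self.decode = nn.Sequential(nn.Linear(audio_dim + id_dim, 3 * 32 * 32), nn.Tanh())

    def forward(self, mels, face):
        # mels: (B, T, 40) log-mel features; face: (B, 3, 64, 64) identity image
        h, _ = self.audio_enc(mels)                          # (B, T, audio_dim)
        idv = self.id_enc(face).unsqueeze(1).expand(-1, h.size(1), -1)
        frames = self.decode(torch.cat([h, idv], dim=-1))    # (B, T, 3*32*32)
        return frames.view(mels.size(0), mels.size(1), 3, 32, 32)


class WordRegressor(nn.Module):
    # Video-to-word regeneration: reads synthesized frames back into word
    # logits, so generated lip motion must stay decodable as the spoken word.
    def __init__(self, vocab=500):
        super().__init__()
        self.feat = nn.Sequential(nn.Flatten(start_dim=2), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
        self.head = nn.Linear(128, vocab)

    def forward(self, video):                                # video: (B, T, 3, 32, 32)
        return self.head(self.feat(video).mean(dim=1))       # (B, vocab)


class FrameDisc(nn.Module):
    # Realism critic over individual frames.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))

    def forward(self, frames):                               # frames: (N, 3, 32, 32)
        return self.net(frames)


class SyncDisc(nn.Module):
    # Synchronization critic: embeds audio and video per time step and scores
    # their agreement; higher cosine similarity means better audio-lip sync.
    def __init__(self, dim=128):
        super().__init__()
        self.a = nn.GRU(input_size=40, hidden_size=dim, batch_first=True)
        self.v = nn.Linear(3 * 32 * 32, dim)

    def forward(self, video, mels):
        ha, _ = self.a(mels)                                 # (B, T, dim)
        hv = self.v(video.flatten(start_dim=2))              # (B, T, dim)
        return F.cosine_similarity(ha, hv, dim=-1)           # (B, T)


def generator_loss(gen, word_net, frame_disc, sync_disc, mels, face, word_ids):
    video = gen(mels, face)
    # Semantic consistency: the regenerated word must match the ground truth.
    ce = F.cross_entropy(word_net(video), word_ids)
    # Multi-purpose adversarial terms: per-frame realism plus audio-lip sync.
    adv = -(frame_disc(video.flatten(0, 1)).mean() + sync_disc(video, mels).mean())
    return ce + adv


# Smoke test with random tensors (2 clips, 16 steps of 40-dim log-mels).
gen, word_net = TalkingFaceGenerator(), WordRegressor()
loss = generator_loss(gen, word_net, FrameDisc(), SyncDisc(),
                      torch.randn(2, 16, 40), torch.randn(2, 3, 64, 64),
                      torch.randint(0, 500, (2,)))
loss.backward()

In this sketch the generator is penalized jointly: cross-entropy on the regenerated word enforces semantic consistency with the audio, while the two discriminator scores push per-frame realism and per-time-step synchronization, mirroring the multi-purpose adversarial setup the abstract describes.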
