Article
Computer Science, Software Engineering
Prashanth Chandran, Loic Ciccone, Markus Gross, Derek Bradley
Summary: Generating realistic facial animation for CG characters and digital doubles is a challenging task. We propose a new method for high-fidelity offline facial performance retargeting that is cost-effective and artifact-free. Our method performs well in human-to-human 3D facial performance retargeting and achieves comparable quality to blendshape-based techniques while requiring fewer input shapes during setup.
ACM TRANSACTIONS ON GRAPHICS
(2022)
Article
Multidisciplinary Sciences
Anupama K. K. Ingale, A. Anny Leema, HyungSeok Kim, J. Divya Udayan
Summary: In this paper, we propose a framework for automatic facial landmark detection and blendshape generation through expression transfer. The facial landmarks are extracted based on geometric information, and deformation transfer is performed using the extracted landmarks and the estimated correspondence between the source and target facial models. Experimental results demonstrate that our method achieves expression transfer with automatically detected landmarks and deformation smoothness comparable to state-of-the-art methods. Our proposed method for automatic facial landmark detection, based on the geometric information of the 3D face model, is faster and simpler than the state-of-the-art automatic deformation transfer method for facial models.
ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING
(2023)
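The entry above transfers expressions by mapping displacement vectors from a source face onto a target face through landmark correspondences. As a hedged illustration of that underlying idea only (not the paper's actual method, and with entirely hypothetical toy data), a naive per-landmark delta copy can be sketched as:

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral, correspondence):
    """Naive expression transfer: compute per-vertex displacement
    vectors on the source (expression minus neutral) and copy them
    onto corresponding target vertices."""
    deltas = np.asarray(src_expr, float) - np.asarray(src_neutral, float)
    tgt = np.asarray(tgt_neutral, float).copy()
    for src_i, tgt_i in correspondence:
        tgt[tgt_i] += deltas[src_i]
    return tgt

# Toy meshes: 3 source landmarks, 2 target vertices.
src_neutral = np.zeros((3, 3))
src_expr = np.array([[0.0, 0.2, 0.0],
                     [0.0, 0.0, 0.0],
                     [0.0, 0.3, 0.0]])
tgt_neutral = np.array([[5.0, 0.0, 0.0],
                        [6.0, 0.0, 0.0]])
# Hypothetical correspondence: source landmark 0 maps to target vertex 1.
result = transfer_expression(src_neutral, src_expr, tgt_neutral, [(0, 1)])
```

Real deformation-transfer systems solve for smooth, mesh-wide deformations rather than copying raw deltas; this sketch shows only the correspondence-driven bookkeeping.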
Article
Computer Science, Software Engineering
Longwen Zhang, Chuxiao Zeng, Qixuan Zhang, Hongyang Lin, Ruixiang Cao, Wei Yang, Lan Xu, Jingyi Yu
Summary: This paper presents a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets. Facial expressions, geometry, and physically-based textures are modeled using separate VAEs to preserve the characteristics of each attribute. Comprehensive experiments show that this technique achieves higher accuracy and visual fidelity in facial reconstruction and animation.
ACM TRANSACTIONS ON GRAPHICS
(2022)
Article
Computer Science, Software Engineering
Monica Villanueva Aylagas, Hector Anadon Leon, Mattias Teye, Konrad Tollmar
Summary: Voice2Face is a deep learning model that generates face and tongue animations directly from recorded speech, offering advantages over previous work. User studies and quantitative evaluations demonstrate Voice2Face's superior animation quality and accurate lip closure, as well as its strong performance with respect to data quality.
COMPUTER GRAPHICS FORUM
(2022)
Article
Computer Science, Software Engineering
Nannan Wu, Qianwen Chao, Yanzhen Chen, Weiwei Xu, Chen Liu, Dinesh Manocha, Wenxin Sun, Yi Han, Xinran Yao, Xiaogang Jin
Summary: The CPU-based real-time cloth animation method presented in this study formulates cloth deformation as a high-dimensional function of body shape and pose parameters, which is divided into two components for efficient computation. By sampling, clustering, and employing sensitivity-based distance measurement, anchoring points are efficiently calculated for synthesizing clothing deformation. The method can animate clothing represented with thousands of vertices at 50+ FPS on a CPU and improves user perception of dressed virtual agents in immersive virtual environments compared to other methods.
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
(2021)
Article
Computer Science, Software Engineering
Lucio Moser, Chinyu Chien, Mark Williams, Jose Serra, Darren Hendler, Doug Roble
Summary: The algorithm proposed in this study achieves automatic transfer of facial expressions between videos and 3D characters, as well as between different 3D characters. By learning common latent representations and mappings between images, the method remaps expressions between different characters regardless of their physiological differences. This technique can be applied to markerless motion capture and automatic facial animation transfer.
ACM TRANSACTIONS ON GRAPHICS
(2021)
Article
Computer Science, Cybernetics
Alberto Cannavo, Emanuele Stellini, Congyi Zhang, Fabrizio Lamberti
Summary: Creating facial animations using 3D computer graphics is a laborious task. The use of blendshapes is common, but has drawbacks such as the need to memorize mappings and limitations in expressiveness. This article proposes a virtual reality-based interface that uses sketches for direct manipulation of blendshapes, addressing these issues.
INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION
(2023)
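Several entries above build on blendshape rigs, where a face mesh is evaluated as a neutral shape plus a weighted sum of per-shape displacement deltas. A minimal sketch of that evaluation, with entirely toy data unrelated to any cited system:

```python
import numpy as np

def blend(neutral, shapes, weights):
    """Evaluate a delta blendshape rig: neutral mesh plus a weighted
    sum of (target shape - neutral) displacement deltas."""
    neutral = np.asarray(neutral, dtype=float)
    result = neutral.copy()
    for shape, w in zip(shapes, weights):
        result += w * (np.asarray(shape, dtype=float) - neutral)
    return result

# Toy rig: a 3-vertex "face" with a single hypothetical 'smile' target.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [2.0, 0.0, 0.0]])
smile = np.array([[0.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0],
                  [2.0, 0.5, 0.0]])

# Half-weight smile: mouth-corner vertices lift halfway.
half_smile = blend(neutral, [smile], [0.5])
```

Production rigs carry dozens to hundreds of such target shapes, which is the source of the setup cost and mapping-memorization burden noted in the entries above.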
Article
Green & Sustainable Science & Technology
Kun Liu, Kang-Ming Chang, Ying-Ju Liu
Summary: There are many facial differences between American animated characters, while Japanese animated characters tend to be similar in design. The subject matter of animation is primarily based on the culture of the people who make it, and designers also bring their own sense of national belonging. The study found that American animators prefer to design a diverse cast of characters based on the proportions of their own faces, which may be related to the diverse ethnic structure of the United States. By contrast, the 'formulaic' style of Japanese animated characters can lead to aesthetic fatigue.
Article
Multidisciplinary Sciences
Jeong-Ha Park, Chae-Yun Lim, Hyuk-Yoon Kwon
Summary: Recent advances in AI technology have greatly improved facial image manipulation, also known as Deepfake. This study focuses on expression swap, which effectively manipulates facial expressions in images and videos without replacing the entire face. The researchers propose an evaluation framework for expression swap models in real-time online class environments, with three defined scenarios. Through quantitative and qualitative evaluation, they compare the performance of two selected models and observe their distinguishing properties. They also devise an architecture for applying the expression swap model to widely used online meeting platforms, showcasing its feasibility for real-time online classes.
SCIENTIFIC REPORTS
(2023)
Article
Computer Science, Software Engineering
Jingwang Ling, Zhibo Wang, Ming Lu, Quan Wang, Chen Qian, Feng Xu
Summary: This article proposes learning a Semantically Disentangled Variational Autoencoder (SDVAE) to parameterize facial details and support their independent manipulation. By exploiting the non-linear capability of deep neural networks, the method achieves better accuracy and greater representational power than linear models.
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
(2023)
Article
Engineering, Biomedical
Amira Gaber, Mona F. Taher, Manal Abdel Wahed, Nevin Mohieldin Shalaby, Sarah Gaber
Summary: This study developed a facial paralysis assessment and classification system using the Kinect sensor and artificial intelligence methods. Results showed that the system is robust, stable, and performs better than existing grading scales.
BIOMEDICAL ENGINEERING ONLINE
(2022)
Article
Computer Science, Artificial Intelligence
Rizwan Sadiq, Engin Erzin
Summary: This article improves affective facial animation through domain adaptation and data augmentation. The proposed models show significant reductions in MSE loss in experiments, and the resulting facial animations are preferred by subjects in subjective evaluations.
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
(2022)
Article
Multidisciplinary Sciences
Thomas Treal, Philip L. Jackson, Jean Jeuvrey, Nicolas Vignais, Aurore Meugnot
Summary: Virtual reality platforms are increasingly used as research tools in social and affective neuroscience to capture emotion communication dynamics. This study found that naturalistic idle motion enhanced the perception of pain expressed by a virtual character, leading to a greater empathic response from participants.
SCIENTIFIC REPORTS
(2021)
Article
Computer Science, Software Engineering
Prashanth Chandran, Gaspard Zoss, Markus Gross, Paulo Gotardo, Derek Bradley
Summary: This article proposes a 3D+time framework for modeling dynamic sequences of 3D facial shapes, which can represent realistic non-rigid motion during a performance. By extending neural 3D morphable models and utilizing a transformer architecture, the authors develop a novel transformer-based autoencoder that can model and synthesize 3D geometry sequences of any length. In addition, the method disentangles the constant facial identity and time-varying facial expressions, allowing for representation of identity-agnostic performances and potential applications in performance synthesis, retargeting, interpolation, completion, denoising, retiming, and 3D body modeling.
COMPUTER GRAPHICS FORUM
(2022)
Article
Computer Science, Software Engineering
Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon, Yaser Sheikh
Summary: The method uses a personalized face model and a novel illumination model to achieve precise real-time facial tracking in any environment. In two steps, it accurately captures subtle facial movements and demonstrates strong adaptability in real-world environments.
ACM TRANSACTIONS ON GRAPHICS
(2021)