Journal
ACM COMPUTING SURVEYS
Volume 54, Issue 1
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3425780
Keywords
Deepfake; deep fake; reenactment; replacement; face swap; generative AI; social engineering; impersonation
Abstract
Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, the impersonation of political leaders, and the defamation of innocent individuals. Since then, these deepfakes have advanced significantly. In this article, we explore the creation and detection of deepfakes and provide an in-depth view into how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of current defense solutions, and (4) the areas that require further research and attention.