Deepfake Phishing: How AI-Generated Media is Used in Social Engineering (2024)

Brij Gupta



6 min read


May 17, 2024


In recent years, the advent of deepfake technology has introduced a new dimension to phishing attacks, transforming the landscape of social engineering. Deepfake phishing leverages artificial intelligence to create highly realistic and convincing media, such as videos, audio, and images, that can deceive even the most vigilant individuals and organizations. This blog explores the concept of deepfake phishing, differentiates it from traditional phishing, and delves into the sophisticated mechanisms behind these attacks. Through real-world examples, we analyze the psychological and technical factors that make deepfake phishing particularly effective. Furthermore, we provide practical guidance on identifying and protecting against these threats, highlighting the importance of technological solutions, organizational policies, and individual precautions. As deepfake technology continues to evolve, we discuss future trends and the ongoing efforts to combat this emerging cybersecurity challenge.

Traditional phishing typically involves sending deceptive emails or messages that appear to come from legitimate organizations in order to trick individuals into revealing sensitive information [1][2]. These attacks often rely on social engineering and website spoofing techniques to deceive users [3]. Phishing can take various forms, including emails, messages, and websites, all with the goal of obtaining valuable user data [4][5]. While traditional phishing mainly targets email and the web, newer variants, such as phishing scams on Ethereum, spread fraudulent information through a much wider range of channels [6][7]. Overall, phishing remains a prevalent threat that exploits human vulnerabilities to gain unauthorized access to confidential information [8][9].

Deepfake phishing involves the use of deepfake technology, which leverages artificial intelligence algorithms to create realistic but fake audiovisual content, to deceive individuals into divulging sensitive information [10][11]. Adversaries can exploit deepfakes in phishing attacks by impersonating trusted individuals through synthesized audio or video, enhancing the effectiveness of spear phishing campaigns [12]. This form of attack is expected to increase, targeting both organizations and individuals for financial gain [10][13]. Deepfake phishing poses a significant threat due to its ability to forge identities and manipulate media to deceive targets, highlighting the importance of developing detection mechanisms to combat this emerging form of cyber threat [11][13][18–20].


Attackers can employ video deepfakes during Zoom calls to convincingly pose as trusted individuals and persuade victims to disclose confidential information, such as credentials, or to authorize fraudulent financial transactions. In one notable incident, a scammer in China used face-swapping technology to impersonate another person and successfully tricked the victim into transferring $622,000. According to Forbes, such incidents highlight the growing danger of video deepfakes in phishing attacks [16].


BEC (Business Email Compromise) is a cyberattack strategy in which an attacker impersonates a legitimate company email account to deceive the target, usually an employee or manager, into divulging account details, transferring money, or disclosing other confidential information.
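One common BEC red flag is a sender domain that nearly, but not exactly, matches a legitimate one (for example, a digit substituted for a letter). The sketch below is a hypothetical illustration, not taken from any specific product, of flagging such lookalike domains with a simple edit-distance check; the domain names used are made up.

```python
# Illustrative sketch: flag sender domains that are suspiciously close to,
# but not exactly, a trusted domain -- a typical BEC impersonation tactic.
# All domain names here are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list[str]) -> bool:
    """True if the sender domain is a near-miss of a trusted domain."""
    sender = sender_domain.lower()
    for trusted in trusted_domains:
        if sender == trusted:
            return False           # exact match: legitimate sender
        if edit_distance(sender, trusted) <= 2:
            return True            # one or two characters off: suspicious
    return False

print(is_lookalike("examp1e.com", ["example.com"]))  # True  (digit "1" for "l")
print(is_lookalike("example.com", ["example.com"]))  # False (exact match)
```

In practice this kind of heuristic would complement, not replace, sender-authentication controls such as SPF, DKIM, and DMARC, since deepfake-enhanced BEC often arrives over voice or video rather than email alone.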


1. Deepfakes pose significant ethical challenges, including their potential use for blackmail, intimidation, and sabotage, as well as broader implications for trust and accountability.

2. They can affect all levels of public and political life, contributing to reputational risks for individuals, the growth of organized crime, and concerns about social stability and national security.

3. Ethical safeguards such as informed consent, privacy protection, traceability, and non-deception significantly improve the ethical acceptability and social acceptance of deepfake content.

4. The use of deepfake technology is morally suspect, especially when digital data representing a person's image and/or voice is used to portray them in ways in which they would be unwilling to be portrayed.

1. The false-association cause of action under Section 43(a)(1)(A) of the Lanham Act is well suited to addressing the problems posed by deepfakes, as it rests on a theory of consumer confusion, the principal mischief that deepfakes create.

2. Governments, including those of the US and China, have enacted new laws criminalizing certain deepfakes, and security policies should be revised to add provisions for handling requests or orders that originate from phone calls and voice chats.

3. The ethical concerns affecting acceptance behavior identified in studies provide an entry point for the ethical regulation of deepfake information.

Deepfake phishing represents a significant and growing threat in the realm of cybersecurity, leveraging advanced AI-generated media to deceive and manipulate targets. This blog has highlighted the unique mechanisms and impacts of deepfake phishing, showcasing its effectiveness through psychological manipulation and technical sophistication. By understanding the warning signs and verification techniques, individuals and organizations can better protect themselves from these sophisticated attacks. Implementing technological solutions, establishing robust organizational policies, and encouraging individual vigilance are crucial steps in mitigating the risks associated with deepfake phishing. As AI technology continues to advance, it is imperative to stay informed and proactive in addressing this evolving threat. Ongoing research and development efforts are essential in developing new defenses and ensuring a secure digital environment. Through collective awareness and action, we can fortify our defenses against the pernicious threat of deepfake phishing.

1. J. Wu, "Who are the phishers? Phishing scam detection on Ethereum via network embedding", 2019.

2. H. Hu and Y. Xu, "SCSGuard: deep scam detection for Ethereum smart contracts", 2021.

3. M. Aburrous, M. Hossain, K. Dahal, and F. Thabtah, "Experimental case studies for investigating e-banking phishing techniques and attack strategies", Cognitive Computation, vol. 2, no. 3, pp. 242–253, 2010.

4. M. Prabakaran, A. Chandrasekar, and P. Sundaram, "An enhanced deep learning-based phishing detection mechanism to effectively identify malicious URLs using variational autoencoders", IET Information Security, vol. 17, no. 3, pp. 423–440, 2023.

5. J. Zhang and Y. Wang, "A real-time automatic detection of phishing URLs", 2012.

6. T. Yu, X. Chen, and Z. Xu, "MP-GCN: a phishing nodes detection approach via graph convolution network for Ethereum", Applied Sciences, vol. 12, no. 14, p. 7294, 2022.

7. A. Kabla, M. Anbar, S. Manickam, T. Al-Amiedy, P. Cruspe, A. Al-Ani et al., "Applicability of intrusion detection system on Ethereum attacks: a comprehensive review", IEEE Access, vol. 10, pp. 71632–71655, 2022.

8. A. Das, S. Baki, A. Aassal, R. Verma, and A. Dunbar, "SoK: a comprehensive reexamination of phishing research from the security perspective", IEEE Communications Surveys & Tutorials, vol. 22, no. 1, pp. 671–708, 2020.

9. M. Khonji, Y. Iraqi, and A. Jones, "Phishing detection: a literature survey", IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 2091–2121, 2013.

10. L. Passos, D. Jodas, K. Costa, L. Júnior, D. Colombo, and J. Papa, "A review of deep learning-based approaches for deepfake content detection", 2022.

11. A. Gaurav et al., "Adaptive defense mechanisms against phishing threats in 6G wireless environments", in 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall), pp. 1–5, IEEE, 2023.

12. A. Dixit, N. Kaur, and S. Kingra, "Review of audio deepfake detection techniques: issues and prospects", Expert Systems, vol. 40, no. 8, 2023.

13. V. Vajrobol et al., "Mutual information based logistic regression for phishing URL detection", Cyber Security and Applications, vol. 2, 100044, 2024.

14. N. Köbis, J. Bonnefon, and I. Rahwan, "Bad machines corrupt good morals", Nature Human Behaviour, vol. 5, no. 6, pp. 679–685, 2021.

15. Y. Mirsky, "DF-CAPTCHA: a deepfake captcha for preventing fake calls", 2022.

16. "Phishing, trust and human wellbeing", 2021.

17. Y. Mirsky, "DF-CAPTCHA: a deepfake captcha for preventing fake calls", 2022.

18. Forbes, "How deepfakes make phishing scams more effective", accessed May 17, 2024.

19. M. Kan, "Deepfaking a celebrity on a Zoom call is now possible", 2020, accessed May 17, 2024.

20. Y. Zhao et al., "Analysis of the impact of social network financing based on deep learning and long short-term memory", Information Systems and e-Business Management, pp. 1–17, 2024.
