Generative AI Is Accelerating Social Engineering
- July 31, 2025
- Clayton Rice, K.C.
Social engineering relies on trickery and persuasion rather than technologically skillful cyber attacks. Threat actors who use social engineering tactics don’t exploit systems. They exploit trust. They hack people. Deploying tactics of psychological manipulation that target human vulnerability, threat actors continue to experiment with artificial intelligence tools to enhance realism and accelerate conventional social engineering schemes.
1. Introduction
Although generative AI has not overwhelmed the tactics of threat actors, two recent reports indicate that artificial intelligence is accelerating the scale and realism of social engineering campaigns. Palo Alto Networks’ 2025 Unit 42 Global Incident Response Report emphasized that more than one-third (36%) of the incident responses reported began with a social engineering tactic. (here) The more recently published 2025 Global Threat Report by CrowdStrike found that voice phishing (vishing) experienced explosive growth between the first and second half of 2024 – up 442%. (here) The reports follow a post earlier this year to the Hogan Lovells website titled Confronting social engineering in the age of artificial intelligence that emphasized the ease with which AI mines open source data about individuals and constructs convincing campaigns. (here)
2. Background
Social engineering, alternatively called “human hacking”, is the leading cause of network compromise according to the State of Cybersecurity 2022 Report published by the Information Systems Audit and Control Association (ISACA). More recently, IBM’s Cost of a Data Breach Report 2025 found that breaches caused by social engineering tactics are among the most costly. (here and here) Social engineering encompasses an evolving range of tactics deployed by cyberattackers to trick targets in order to gain access to an organization’s systems and data. It is now considered “the top intrusion point globally, attracting an array of financially motivated and nation-state backed threat groups.” (here) In a piece by Catherine Reed for Firewall Times reviewing hacking statistics for 2023, one source estimated that 98% of cyber attacks relied on some form of social engineering. (here)
3. An Old School Grifter
In a post to Cisco Systems Inc.’s website titled What Is Social Engineering? the aim is described as “gaining the trust of targets” so they lower their guard. They are then encouraged to take unsafe actions like divulging personal information or clicking on infected web links or email attachments. It has more to do with the “psychology of persuasion” than a cyber attack – targeting the mind like an old school grifter. (here) A post to the IBM website with the same title emphasized that social engineering is attractive to malicious actors because it enables them to gain access to networks, devices and accounts without having to do the difficult technical work of getting around cybersecurity controls. (here) Social engineering attacks are difficult to prevent because they rely on human vulnerability rather than technological prowess. In a large organization it only takes one employee’s mistake to compromise an entire network.
Most social engineering attacks employ one or more of the following tactics discussed in the IBM post: (a) posing as a brand that a target knows, trusts and may do business with; (b) posing as a government agency that people trust or respect; (c) inducing fear or a sense of urgency as people tend to act rashly when scared or hurried; (d) appealing to greed such as offering a financial reward in exchange for banking information; and, (e) appealing to helpfulness or curiosity such as a message from a social networking site to participate in a survey. Here are some examples of social engineering schemes that I have taken from a post to the Canadian Centre for Cyber Security (CCCS) website:
- Phishing is a tactic threat actors use where a message that appears to be from a trusted source is sent to a large number of recipients. The message may ask recipients to provide sensitive information, complete an action (like change a personal or network password) or click on a link which looks legitimate but is actually malicious.
- Spear phishing is a phishing attack sent to a select group of individuals or a single person which includes details that are tailored to be more convincing and make the source appear more legitimate.
- Quishing occurs when a phishing attack includes a quick response (QR) code that takes the target to a malicious website when scanned.
- Baiting is an attack that occurs when a threat actor convinces the target to take an action (like clicking on a malicious link) by promising something appealing like a prize.
- Quid pro quo is an attack where a threat actor convinces a target to give up sensitive information in exchange for the promise of a service in return.
- Honey traps occur where a threat actor engages a target in a fake romantic relationship online in order to get money or sensitive information. (here)
Other types of phishing include: vishing (voice phishing) conducted by phone call or voice message; smishing (or SMS phishing) through text message; search engine phishing where hackers create malicious websites ranking high in search results for popular search terms; and, angler phishing that uses a fake social media account masquerading as an official account of a trusted company’s customer service team. Threat actors will usually follow a pattern when executing social engineering attacks. There are four phases of a typical attack as described by CCCS.
- The Bait: Threat actors research an organization and its employees and then target them with an attack that appears to come from a trusted source. Information posted on social media sites, like Facebook, TikTok, LinkedIn or Instagram, can be leveraged by threat actors to enhance the ruse that they know their target. The knowledge they have makes them seem more trustworthy and authentic.
- The Hook: Using social connection, sympathy, imposed urgency, threats, or a disarming tone, a threat actor hooks the target into their scheme. Users believe the scenario or request presented is real and that the threat actor is authentic.
- The Attack: Users are tricked into giving up sensitive information about themselves or an organization by clicking on a malicious link, changing passwords in a way that gives a threat actor access to accounts and networks, or opening a malicious attachment. This provides the threat actor with the key to unlock and steal the target’s information.
- The Escape: Once a threat actor has convinced the user to complete a task, or has the information they want, they will disappear. They may also use scare tactics to silence the victim.
Social engineering, however, is undergoing significant change with the integration of artificial intelligence as malicious actors leverage new tools to exploit old vulnerabilities. What, then, are the trends in the adaptation of generative AI to social engineering campaigns?
4. Adapting to Artificial Intelligence
In an article titled Artificial Intelligence: The Evolution of Social Engineering posted to the General Social Engineer Blog on February 20, 2024, Josten Peña drew attention to the growing prevalence of automation, AI-assisted attacks and the unique threats posed by the emergence of deepfake technology. (here) The Unit 42 report found that social engineering is evolving into “one of the most reliable, scalable and impactful intrusion methods in 2025”. Artificial intelligence is accelerating both the scale and realism of social engineering campaigns. “Threat actors are now using generative AI to craft personalized lures, clone executive voices in callback scams and maintain live engagement during impersonation campaigns,” according to the Unit 42 report.
The CrowdStrike report found that, although malicious use of AI is growing, “it remains largely iterative and evolutionary at this point in time.” Nonetheless, GenAI played a “pivotal role” in sophisticated cyberattack campaigns in 2024. It enabled FAMOUS CHOLLIMA to create highly convincing fake IT job candidates that infiltrated victim organizations, and it helped China-, Russia-, and Iran-affiliated threat actors conduct AI-driven disinformation and influence operations to disrupt elections. Vishing, again, figured prominently. Several eCrime adversaries incorporated vishing into their intrusions in 2024, amounting to a 40% compounded growth rate in observed vishing operations for the year. Although GenAI is still relatively novel, CrowdStrike “identified several examples of its use and anticipates it will be employed in 2025 adversary operations.”
5. Conclusion
Artificial intelligence is being leveraged by threat actors to weaponize methods that are hard to detect due to their realism, data about targets scraped from the internet and enhanced capability to communicate in multiple languages. These tools are trained on vast quantities of open source data that provide detailed accounts of individuals’ lives and can be repurposed for the creation of convincing deepfakes and voice cloning. Although this tells us nothing new about human gullibility, it is the vastness of scale and the plausibility of illusion that divests trust from the truth. In an article published by Brookings titled Artificial intelligence, deepfakes, and the uncertain future of truth, John Villasenor reminded us all that deepfakes “can scramble our understanding of truth [b]y exploiting our inclination to trust the reliability of evidence”. (here)