**Definition.** AI deepfake social engineering fraud is deception that uses AI-generated or AI-manipulated audio, video, images, or text (“synthetic media”) to impersonate trusted people or entities (e.g., CEO/CFO, vendor, customer, regulator, family member) and induce victims to take actions they otherwise would not—most commonly authorizing payments, changing bank details, disclosing credentials/secrets, or granting system access. ([CrowdStrike deepfake attack explainer](https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/deepfake-attack/), [Fortinet deepfake glossary](https://www.fortinet.com/resources/cyberglossary/deepfake), [Check Point on AI social engineering](https://www.checkpoint.com/cyber-hub/threat-prevention/social-engineering-attacks/ai-social-engineering/))

**Why it’s different from traditional phishing/BEC.** Traditional social engineering relies on email spoofing, urgency, and text-only pretexts; deepfake-enabled social engineering adds high-fidelity “sensory proof” (seeing and hearing a familiar executive) that defeats human skepticism and can bypass voice/face-based authentication workflows. ([Fortinet deepfake glossary](https://www.fortinet.com/resources/cyberglossary/deepfake), [Coalition Security Labs on deepfakes](https://www.coalitioninc.com/blog/security-labs/deepfakes-are-making-cyber-scams-more-difficult-to-detect))

**Threat mechanics (common attack chain).**
1) Recon: the attacker gathers OSINT (earnings calls, webinars, podcasts, LinkedIn clips) to capture voice/face samples and organizational context (vendors, projects, finance approvals). ([Check Point on AI social engineering](https://www.checkpoint.com/cyber-hub/threat-prevention/social-engineering-attacks/ai-social-engineering/))
2) Model prep: voice cloning, face reenactment, or avatar creation; LLMs generate targeted scripts and conversational replies. ([CrowdStrike deepfake attack explainer](https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/deepfake-attack/))
3) Delivery: a multi-channel “hybrid” approach—phishing email + Teams/Zoom call + WhatsApp/SMS + spoofed caller ID—to create corroboration. ([Coalition Security Labs on deepfakes](https://www.coalitioninc.com/blog/security-labs/deepfakes-are-making-cyber-scams-more-difficult-to-detect))
4) Action: the victim authorizes a payment, changes bank details, or shares credentials.
5) Monetization: rapid fund movement through mule accounts, or follow-on intrusion using harvested credentials.

**Primary loss types.**
- Direct financial theft (wire transfers, vendor payment diversion).
- Credential compromise → downstream ransomware/data theft.
- Reputational harm/stock manipulation via synthetic statements.
- Claims/underwriting fraud (deepfake “evidence”). ([Swiss Re SONAR 2025](https://www.swissre.com/institute/research/sonar/sonar2025/how-deepfakes-disinformation-ai-amplify-insurance-fraud.html), [Coalition Deepfake Response Endorsement announcement](https://www.coalitioninc.com/announcements/coalition-adds-deepfake-response-endorsement))
**Why AI agents change the threat model.** Agentic systems (LLM-driven assistants that can message, call, schedule, approve, or execute workflows via tools/APIs) expand deepfake social engineering from “tricking a person once” to “tricking an automated actor repeatedly and at machine speed.” ([Experian 2026 Future of Fraud Forecast press release](https://www.experianplc.com/newsroom/press-releases/2026/experian-s-new-fraud-forecast-warns-agentic-ai--deepfake-job-can))

**Key agent-specific failure modes.**
1) **Tool-triggered payments and vendor changes.** If an agent is allowed to initiate wires, update vendor bank details, or approve invoices via ERP/banking integrations, a deepfaked “executive” request delivered through voice/video can be converted into an authenticated API action by the agent. The risk concentrates at the boundary between *identity trust* (who is speaking) and *authorization* (who can execute). (Illustrated by how the Arup/HK scam used a “confidential transaction” pretext to trigger transfers.) ([CNN on Hong Kong police briefing](https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk))
2) **Conversation-based authorization bypass.** Agents that follow natural-language instructions (“Yes, process that urgent payment”) are vulnerable to persuasive “authority/urgency” prompts delivered via trusted channels (Teams/Slack), even without a system breach. ([Coalition Security Labs on deepfakes](https://www.coalitioninc.com/blog/security-labs/deepfakes-are-making-cyber-scams-more-difficult-to-detect))
3) **Identity proofing & KYC agents.** Agents that perform remote identity verification can be fooled if they rely on a single modality (face video, voice print); layered controls (MFA + liveness + behavioral signals) become mandatory. ([Thales deepfake fraud defense strategies](https://cpl.thalesgroup.com/blog/access-management/deepfake-fraud-defense-strategies))
4) **Helpdesk/IT reset agents.** An agent that can reset passwords, provision access, or grant roles becomes a “privilege escalation oracle.” Deepfake voice/video used against the helpdesk is already a highlighted tactic; if the helpdesk is automated, the blast radius increases. ([Coalition Security Labs on deepfakes](https://www.coalitioninc.com/blog/security-labs/deepfakes-are-making-cyber-scams-more-difficult-to-detect))
5) **Model supply-chain and prompt-injection blending.** Attackers can combine deepfake impersonation of a colleague with malicious links/docs that inject instructions into the agent (“read this invoice PDF and follow steps”), turning social engineering into automated tool misuse.

**Agent-specific mitigations (design patterns).**
- **Explicit policy gates for high-risk tools** (payments, bank detail changes, privilege grants): require multi-party approval and cryptographic step-up auth, not conversational confirmation.
- **Out-of-band verification hooks**: the agent must challenge/verify unusual requests via pre-registered secure channels. ([Delinea mitigation guidance](https://delinea.com/blog/how-to-mitigate-ai-powered-social-engineering-attacks))
- **Least-privilege tool scopes** and **transaction anomaly detection** for agent actions.
- **Human-in-the-loop** for high-value actions, with independent verification.
- **Provenance-aware comms**: signed internal announcements; verified exec channels; reduced reliance on “voice/video authenticity.” ([Tech Policy Press](https://techpolicy.press/what-the-eus-new-ai-code-of-practice-means-for-labeling-deepfakes))
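The “explicit policy gate” pattern can be sketched in a few lines. This is a minimal illustration, not a reference to any particular agent framework: the tool names, the approver registry, and the dual-control threshold are all hypothetical, and a real deployment would use hardware-backed credentials (passkeys/HSM signatures) rather than in-memory HMAC keys. The point it demonstrates is that a high-risk tool call succeeds only with multiple independent cryptographic approvals over the exact payload; a persuasive chat message or deepfaked voice never reaches the gate.

```python
import hmac
import hashlib
import json

# Hypothetical approver registry: signing keys pre-registered out of band.
# Illustrative only; production systems would use passkeys or HSM-backed keys.
APPROVER_KEYS = {"cfo": b"demo-key-cfo", "controller": b"demo-key-controller"}

HIGH_RISK_TOOLS = {"initiate_wire", "update_vendor_bank_details", "grant_role"}
REQUIRED_APPROVALS = 2  # dual control for high-risk actions


def sign_request(approver: str, request: dict) -> str:
    """Approver signs the exact request payload (tool, amount, beneficiary)."""
    payload = json.dumps(request, sort_keys=True).encode()
    return hmac.new(APPROVER_KEYS[approver], payload, hashlib.sha256).hexdigest()


def policy_gate(tool: str, request: dict, approvals: dict) -> bool:
    """Allow a high-risk tool call only with N valid, distinct approvals.

    Conversational confirmation ("yes, process it") cannot satisfy this gate:
    it verifies signatures over the payload, so a cloned voice or spoofed chat
    message carries no authorization weight.
    """
    if tool not in HIGH_RISK_TOOLS:
        return True  # low-risk tools pass through under least-privilege scopes
    payload = json.dumps(request, sort_keys=True).encode()
    valid = {
        who
        for who, sig in approvals.items()
        if who in APPROVER_KEYS
        and hmac.compare_digest(
            sig, hmac.new(APPROVER_KEYS[who], payload, hashlib.sha256).hexdigest()
        )
    }
    return len(valid) >= REQUIRED_APPROVALS


wire = {"tool": "initiate_wire", "amount": 250_000, "beneficiary": "ACME GmbH"}
one_signoff = {"cfo": sign_request("cfo", wire)}
dual_signoff = {**one_signoff, "controller": sign_request("controller", wire)}

assert not policy_gate("initiate_wire", wire, one_signoff)   # blocked: one approval
assert policy_gate("initiate_wire", wire, dual_signoff)      # allowed: dual control
```

Because the signature covers the full payload, an attacker who obtains approval for one transfer cannot replay it against a modified beneficiary or amount; any tampering invalidates every signature.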
**2019 – UK energy firm (voice deepfake CEO-to-CEO).** Criminals used AI voice cloning to impersonate a German executive and convinced the UK subsidiary CEO to transfer ~€220,000 (often reported as ~$243,000) to a fraudulent supplier account. ([Fortinet deepfake glossary](https://www.fortinet.com/resources/cyberglossary/deepfake), [Gallagher deepfake social engineering article](https://www.ajg.com/news-and-insights/deep-fake-technology-the-frightening-evolution-of-social-engineering/))

**2020 – UAE bank manager voice-cloning heist (~$35M).** Public court records tied to an international recovery effort describe a “vishing”/voice-cloning scheme resulting in ~$35 million transferred from a UAE bank. ([Red Goat Cyber Security write-up referencing court documents](https://red-goat.com/voice-cloning-heist/))

**Feb 2024 – Hong Kong ‘multinational firm’ deepfake video conference (~HK$200M / ~$25.6M).** Hong Kong police described a case in which a finance employee joined a video call with multiple deepfaked “colleagues,” including the CFO, and transferred ~HK$200M (~$25.6M) through 15 transactions. ([CNN report on Hong Kong police briefing](https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk))

**Jan 2024 (disclosed May 2024) – Arup confirms it was the victim of the deepfake scam (~$25M).** Arup confirmed it was the firm in the Hong Kong case and that counterfeit voices/images were used; the funds were sent by an employee in Hong Kong. ([CNN on Arup confirmation](https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk))

**2024 – WPP CEO deepfake attempt (no loss).** Scammers created a fake WhatsApp account and arranged a Microsoft Teams meeting using cloned voice/video of CEO Mark Read; WPP reported no money or information was lost. ([ISTARI case note on WPP incident](https://istari-global.com/insights/spotlight/ceo-deepfake-wpp/))

**Observed pattern across incidents.** The largest losses combine (a) pretexting around confidential transactions and (b) “social proof” via video calls with multiple synthetic participants, which defeats the standard “call your boss” instinct because the victim believes they already did. ([CNN on Arup confirmation](https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk), [CNN on Hong Kong police briefing](https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk))
**Attack prevalence / frequency.**
- 62% of organizations reported experiencing a deepfake attack involving social engineering or automated process exploitation (Gartner survey as reported by KnowBe4). ([KnowBe4 blog citing 2025 Gartner survey](https://blog.knowbe4.com/unmasking-the-deepfake-threat-a-game-changer-for-reducing-human-risk))

**Contact center / voice channel growth.**
- Pindrop reports a 680% year-over-year rise in deepfake activity and a 475% increase in synthetic voice fraud in insurance, based on analysis of >1.2B calls. ([Pindrop deepfake fraud forecast](https://www.pindrop.com/article/deepfake-fraud-could-surge/))

**Macro cost estimates (use cautiously).**
- IBM cites an “expected global cost of deepfake fraud in 2024” of $1 trillion and notes the average cost of creating a deepfake can be very low (e.g., $1.33 cited). These figures are directional/forecast-style rather than audited loss totals. ([IBM on deepfake cybercrime](https://www.ibm.com/think/insights/new-wave-deepfake-cybercrime))

**Loss totals from incident datasets.**
- Surfshark (using the AI Incident Database + Resemble AI dataset) estimates deepfake-related losses exceeded $1.56B as of 2025, with >$1B occurring in 2025 alone. ([Surfshark deepfake losses chart](https://surfshark.com/research/chart/ai-deepfake-losses))

**Enterprise cyber/insurance market signal.**
- NAIC’s 2025 Cybersecurity Insurance Market report flags “vishing and smishing” surges and cites the Arup ~$25.6M deepfake-enabled fraud example as evidence that identity trust can be weaponized. ([NAIC 2025 Cybersecurity Insurance Report PDF](https://content.naic.org/sites/default/files/inline-files/2025_Cybersecurity_Insurance%20Report.pdf))
Deepfake social-engineering *fraud* often leads to criminal investigations and recovery actions rather than landmark published case law (many matters settle, remain confidential, or involve cross-border enforcement).
Still, several legal precedents illustrate how courts are already handling deepfakes in adjacent contexts relevant to this risk category:

**1) Deepfakes submitted as evidence (court integrity / “deepfake defense”).**
- *Mendones v. Cushman & Wakefield* (Alameda County, CA; reported 2026): the National Center for State Courts describes a judge identifying submitted witness video testimony as AI-generated and notes this as one of the first detected instances of a deepfake presented as purportedly authentic evidence. ([National Center for State Courts](https://www.ncsc.org/resources-courts/ai-generated-evidence-threat-public-trust-courts))

**2) Courts confronting deepfake allegations about authentic video (“deepfake defense” behavior).**
- *Huang v. Tesla* (discovery/authentication dispute; described in legal commentary): the Berkeley Technology Law Journal summarizes a court rebuking a party for refusing to concede a video’s authenticity based solely on speculative deepfake risk, highlighting emerging standards for evidence authenticity. ([Berkeley Technology Law Journal blog](https://btlj.org/2025/06/deepfaked-evidence-what-case-law-tells-us-about-how-the-rules-of-authenticity-needs-to-change/))

**3) Deepfake harassment/hostile-environment precedents (liability theories transferable to business contexts).**
- Littler reports a California appellate court affirming a ~$4M verdict involving circulation of an AI-generated sexually explicit image resembling a police captain, treating dissemination of fabricated content as unlawful harassment under California law. ([Littler ASAP](https://www.littler.com/news-analysis/asap/deepfakes-workplace-emerging-legal-risks-ai-driven-harassment))

**Practical implication for insurers/insureds.** Deepfakes create dual legal exposure: (a) fraud/theft claims (crime coverage disputes) and (b) liability from distribution/hosting/operational failures (E&O, media liability, employment practices), plus evidentiary disputes in which deepfakes can both forge proof and undermine legitimate recordings. ([National Center for State Courts](https://www.ncsc.org/resources-courts/ai-generated-evidence-threat-public-trust-courts), [Swiss Re SONAR 2025](https://www.swissre.com/institute/research/sonar/sonar2025/how-deepfakes-disinformation-ai-amplify-insurance-fraud.html))
**European Union – AI Act (Regulation (EU) 2024/1689): transparency duties for deepfakes.**
- Article 50 creates transparency obligations for certain AI systems and explicitly addresses disclosure/labeling of AI-generated or AI-manipulated “deepfake” content, requiring deployers to disclose that content is artificially generated or manipulated (with limited exceptions, such as obvious satire/fiction). ([Tech Policy Press explanation of AI Act Article 50](https://techpolicy.press/what-the-eus-new-ai-code-of-practice-means-for-labeling-deepfakes))

**EU – Digital Services Act (DSA) interplay.** Platforms’ transparency and content moderation duties under the DSA can apply when deepfakes constitute illegal content or disinformation; Tech Policy Press notes the EU approach is layered across the AI Act + DSA + GDPR. ([Tech Policy Press](https://techpolicy.press/what-the-eus-new-ai-code-of-practice-means-for-labeling-deepfakes))

**United States – State laws targeting deepfakes (fragmented; often elections/intimate imagery, but increasingly fraud/impersonation).**
- NCSL tracks 2024 state legislation on “deceptive audio or visual media” (commonly election-related restrictions/disclosures). ([NCSL deepfake legislation overview](https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation))
- Example trend: states creating crimes and civil causes of action for forged digital likenesses used to defraud, threaten, or harass (e.g., WA forged digital likeness; NH fraudulent use). (Summaries compiled by a security law overview site—verify against the statutes for underwriting/compliance use.) ([HALOCK Security Labs overview](https://www.halock.com/what-legislation-protects-against-deepfakes-and-synthetic-media/))

**United States – Insurance regulatory perspective (NAIC).**
- NAIC’s 2025 Cybersecurity Insurance Market report flags deepfake-enabled fraud (e.g., Arup) as a market-relevant cyber risk and highlights vishing/smishing surges, informing model guidance and supervisory attention even if NAIC does not “regulate deepfakes” directly. ([NAIC 2025 Cybersecurity Insurance Report PDF](https://content.naic.org/sites/default/files/inline-files/2025_Cybersecurity_Insurance%20Report.pdf))

**Standards (quasi-regulatory) influencing controls and underwriting.**
- NIST work on synthetic content risk reduction and digital identity (referenced in industry compliance commentary) is increasingly used as a benchmark for “reasonable security” in disputes, procurement, and underwriting. (Netarx overview of NIST guidance.)
**Where coverage typically sits.** Deepfake social engineering losses span multiple insurance lines depending on the harm:

1) **Crime / Fidelity (primary for fraudulent transfer).**
- **Social Engineering Fraud** / **Funds Transfer Fraud** endorsements/riders are usually needed to cover voluntary wire transfers induced by impersonation (including deepfake voice/video). Many cyber policies exclude these losses via “voluntary parting” style exclusions unless specifically endorsed. (Market structure discussed in industry analysis; treat as general guidance and validate policy wordings.) ([CyberScoop on Coalition coverage context](https://cyberscoop.com/url-coalition-cybersecurity-insurance-coverage-deepfakes-reputational-harm/))

2) **Cyber insurance (incident response + some fraud, depending on endorsements).**
- Cyber policies may cover costs tied to phishing/social engineering that leads to network intrusion (forensics, breach response) and may offer limited social engineering sublimits; however, pure “authorized transfer” fraud often needs crime coverage or an explicit endorsement. ([NAIC 2025 Cybersecurity Insurance Report PDF](https://content.naic.org/sites/default/files/inline-files/2025_Cybersecurity_Insurance%20Report.pdf))

3) **Reputational harm / media response to synthetic impersonation.**
- Coalition launched a **Deepfake Response Endorsement** to its cyber insurance policies globally, providing access to deepfake forensics, legal takedown work, and related support for deepfake events (positioned around damaging impersonations/reputation). ([Coalition announcement](https://www.coalitioninc.com/announcements/coalition-adds-deepfake-response-endorsement))

4) **Professional liability / D&O / EPLI (secondary liability).**
- If deepfakes cause misstatements, stock impacts, employment-related harms, or professional service errors, coverage may implicate D&O, EPLI, E&O, or media liability depending on the fact pattern.

**Providers/products explicitly referencing deepfakes (example).**
- Coalition: Deepfake Response Endorsement (cyber). ([Coalition announcement](https://www.coalitioninc.com/announcements/coalition-adds-deepfake-response-endorsement))

**Underwriting/coverage caveats.** Deepfake events frequently trigger disputes over (a) whether there was a “computer system breach,” (b) whether funds were “voluntarily transferred,” and (c) whether verification procedures were followed—so policy definitions, conditions, and sublimits are critical. ([CyberScoop on Coalition coverage context](https://cyberscoop.com/url-coalition-cybersecurity-insurance-coverage-deepfakes-reputational-harm/))
**Governance & process controls (highest ROI).**
- **Out-of-band verification for high-risk requests** (wires, vendor bank changes, credential resets): verify via a known-good channel (call-back to a directory number; separate approval workflow), not contact info contained in the request. ([Delinea mitigation guidance](https://delinea.com/blog/how-to-mitigate-ai-powered-social-engineering-attacks), [UCLA Health tips](https://it.uclahealth.org/protect-yourself-from-deepfakes))
- **Dual control / multi-approver payments** and **delayed transfer windows** for large or unusual transactions; segregate duties between request, approval, and release. ([Arctic Wolf guidance](https://arcticwolf.com/resources/blog/how-to-combat-ai-enhanced-social-engineering-attacks/))

**Identity and access hardening.**
- Phishing-resistant MFA/passkeys, least privilege, and just-in-time privileged access; enforce approvals for privilege elevation. ([Delinea mitigation guidance](https://delinea.com/blog/how-to-mitigate-ai-powered-social-engineering-attacks), [Arctic Wolf guidance](https://arcticwolf.com/resources/blog/how-to-combat-ai-enhanced-social-engineering-attacks/))

**Deepfake-aware training and exercises.**
- Update security awareness training to cover voice/video deepfakes and “hybrid” pretexts; run simulations that include realistic calls and video meeting invites. ([Delinea mitigation guidance](https://delinea.com/blog/how-to-mitigate-ai-powered-social-engineering-attacks))

**Technical detection & authentication.**
- For onboarding/identity verification: combine MFA with liveness detection and behavioral biometrics; monitor for anomalies. ([Thales deepfake fraud defense strategies](https://cpl.thalesgroup.com/blog/access-management/deepfake-fraud-defense-strategies))
- Implement media provenance and authenticated internal comms where feasible (digitally signed memos, verified exec channels) to reduce reliance on “it looked/sounded right.” ([Delinea mitigation guidance](https://delinea.com/blog/how-to-mitigate-ai-powered-social-engineering-attacks))

**Response playbooks.**
- Create a “deepfake event” runbook: evidence preservation, forensics vendor, platform takedown requests, crisis comms, bank recall procedures, and law enforcement contact. Coalition’s endorsement description provides a concrete example of response services to pre-plan. ([Coalition announcement](https://www.coalitioninc.com/announcements/coalition-adds-deepfake-response-endorsement))
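The out-of-band verification and dual-control rules above can be sketched as a small decision function. This is a minimal illustration under stated assumptions: the directory of record, the request fields, and the $10,000 threshold are hypothetical, not drawn from any cited control framework. The one non-negotiable rule it encodes is that the call-back number comes from contact details registered at onboarding, never from the incoming request, because a deepfaked caller will supply a number that routes back to the attacker.

```python
from dataclasses import dataclass

# Hypothetical directory of record: contact points registered at vendor
# onboarding, maintained independently of any incoming message and never
# updated from within a request.
DIRECTORY_OF_RECORD = {
    "vendor-acme": {"callback_phone": "+1-555-0100", "approver": "controller"},
}

HIGH_RISK_THRESHOLD = 10_000  # dollars; illustrative, tune per risk appetite


@dataclass
class BankChangeRequest:
    vendor_id: str
    new_account: str
    claimed_callback: str  # phone number supplied *in* the request (untrusted)
    amount_at_risk: int


def verification_plan(req: BankChangeRequest) -> dict:
    """Decide how to verify a vendor bank-detail change before acting on it."""
    if req.vendor_id not in DIRECTORY_OF_RECORD:
        return {"action": "reject", "reason": "unknown vendor"}
    entry = DIRECTORY_OF_RECORD[req.vendor_id]
    plan = {
        "action": "hold_pending_verification",
        # Known-good channel always wins over whatever the request claims.
        "callback_phone": entry["callback_phone"],
        "ignore_number_in_request": req.claimed_callback != entry["callback_phone"],
    }
    if req.amount_at_risk >= HIGH_RISK_THRESHOLD:
        plan["second_approver"] = entry["approver"]  # dual control kicks in
    return plan


req = BankChangeRequest("vendor-acme", "DE89 3704...", "+1-555-9999", 250_000)
plan = verification_plan(req)
assert plan["callback_phone"] == "+1-555-0100"   # directory beats request
assert plan["second_approver"] == "controller"   # large amount → dual control
```

Separating the verification decision from the channel the request arrived on is the whole point: even a flawless voice clone only controls the untrusted inputs, never the directory of record or the approver list.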
- Arup CIO Rob Greig described the Arup incident as “technology-enhanced social engineering” and emphasized it “wasn’t even a cyberattack in the purest sense” because “none of our systems were compromised.” ([World Economic
Forum](https://www.weforum.org/stories/2025/02/deepfake-ai-cybercrime-arup/)) - Coalition incident response lead Shelley Ma said: “In the handful of cases where we have spotted deepfakes, we’ve seen attackers mostly use AI-generated voice or text to impersonate trusted contacts.” ([CyberScoop](https://cyberscoop.com/url-coalition-cybersecurity-insurance-coverage-deepfakes-reputational-harm/))
- David Maimon (SentiLink fraud insights) told WIRED: “We are witnessing a significant rise in the number of deepfakes… It was minimal back then… Now, we are encountering hundreds of these cases monthly.”
**1) Real-time, interactive multi-person deepfake calls become routine.** Multiple sources project a shift from static clips to interactive avatars that respond in live calls—exactly the pattern seen in the Arup/HK case, but at higher volume. ([The Conversation](https://theconversation.com/deepfakes-leveled-up-in-2025-heres-whats-coming-next-271391), [CNN on Arup confirmation](https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk))

**2) Industrialization of “vishing at scale” via agentic tooling.** As deepfake creation and call automation costs drop, attackers can profitably target smaller firms and individuals (not just “whales”), and voice channels (contact centers, help desks) become a dominant battleground. ([Pindrop deepfake fraud forecast](https://www.pindrop.com/article/deepfake-fraud-could-surge/))

**3) Fraud shifts from human-to-human to machine-to-machine.** Experian’s 2026 forecast highlights “machine-to-machine mayhem” in which criminals exploit agentic AI interactions and liability becomes unclear—raising insurance questions about attribution, intent, and authorization. ([Experian 2026 Future of Fraud Forecast press release](https://www.experianplc.com/newsroom/press-releases/2026/experian-s-new-fraud-forecast-warns-agentic-ai--deepfake-job-can))

**4) Underwriting/claims pressure increases across lines.** Swiss Re warns deepfakes may increasingly be used in sophisticated cyberattacks and drive cyber insurance losses, while also boosting demand for anti-fraud solutions. ([Swiss Re SONAR 2025](https://www.swissre.com/institute/research/sonar/sonar2025/how-deepfakes-disinformation-ai-amplify-insurance-fraud.html))

**5) Regulatory emphasis on transparency/provenance.** EU AI Act transparency/labeling obligations for synthetic content push enterprises toward provenance tooling and disclosure workflows, which will likely become “table stakes” for compliance and reputational defense. ([Tech Policy Press](https://techpolicy.press/what-the-eus-new-ai-code-of-practice-means-for-labeling-deepfakes))