Deepfake Fraud Is a $40B Problem: How Companies Are Fighting Back in 2026
In February 2024, a finance employee at the Hong Kong office of multinational firm Arup wired $25M to fraudsters after attending a video call where every participant — including the CFO — was a real-time deepfake. That single incident, widely covered at the time, was the first publicly confirmed case of a deepfake-driven business email compromise that broke the eight-figure threshold.
By 2025, those cases stopped being newsworthy because they happened weekly. The Deloitte Center for Financial Services projected in early 2024 that deepfake-related losses would reach $40 billion by 2027. The actual number for 2025 came in higher than that projection — Visa's fraud-trends report logged $43.7B in global losses attributed at least in part to deepfake content. The trajectory hasn't slowed in 2026.
This piece is for trust-and-safety leaders, fraud-prevention teams, security architects, and execs who need to understand what's happening, what defenses work, and what's still unsolved. We'll cover the major attack vectors, the industries getting hit hardest, the response patterns of the companies that have stopped most of the bleeding, and the open problems.
The attack vectors, in order of dollar volume
Five categories account for most of the loss:
1. CEO fraud and business email compromise (BEC)
The Arup case is the canonical example. The pattern: fraudsters gather public video and audio of a senior executive (LinkedIn videos, earnings calls, conference talks, podcast appearances), train a deepfake model, and impersonate that executive on a video call to authorize an urgent wire transfer.
Why it works:
- Senior executives have unusually large amounts of publicly available training data.
- Wire approval workflows in many companies still rely on visual confirmation rather than out-of-band authentication.
- Urgency and authority compound — an "urgent" request from a "CFO" overrides the cautious instincts of mid-level employees.
Companies that have hardened against this: every major bank, every Fortune 500, and a growing share of mid-market companies. The fix is procedural, not technical — wire approvals over a certain threshold now require two out-of-band confirmations using pre-arranged channels (a second video call on a different platform, a callback to a known phone number, or a physical token-based confirmation). When the procedure is enforced, the deepfake doesn't help.
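The procedural fix above can be expressed as a simple policy check. This is an illustrative sketch, not any company's actual control: the names (`WireRequest`, the channel labels) and the $100K threshold are all hypothetical placeholders for whatever your risk policy defines.

```python
from dataclasses import dataclass, field

CONFIRMATION_THRESHOLD = 100_000  # dollars; set per your own risk policy

# Pre-arranged out-of-band channels; a live video call is deliberately absent.
APPROVED_CHANNELS = {"callback_known_number", "secondary_video_platform", "hardware_token"}

@dataclass
class WireRequest:
    amount: int
    requester: str
    # Confirmations actually received, by channel label
    confirmations: set = field(default_factory=set)

def may_release(req: WireRequest) -> bool:
    """A video call alone never authorizes a large wire: above the
    threshold, require two independent pre-arranged confirmations."""
    if req.amount < CONFIRMATION_THRESHOLD:
        return True
    valid = req.confirmations & APPROVED_CHANNELS
    return len(valid) >= 2
```

The point of the design is that the deepfake never enters the decision: even a perfect impersonation on the call contributes zero of the two required confirmations.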
2. Account takeover via deepfake KYC
Banks, fintechs, crypto exchanges, gambling platforms, and other regulated services use Know Your Customer (KYC) flows that typically include a "liveness check" — the user films themselves on camera, sometimes performing prompted actions, to prove they're a real person matching the ID they submitted.
In 2024, basic deepfake injection attacks (replacing the camera feed with a pre-recorded deepfake) defeated 90%+ of consumer-grade liveness checks. In 2026, providers like Onfido, Persona, Jumio, and Veriff have hardened their liveness systems, but the arms race continues. The high-end attacks now use:
- Real-time face swaps that replace the attacker's face with a target's face during the live capture, so prompted-action challenges get answered correctly.
- 3D avatar puppets — the attacker drives a high-fidelity 3D model of the target's face from their own webcam.
- Synthetic identities — entirely fabricated people whose faces don't correspond to anyone real, paired with fabricated ID documents.
Defenses that are working:
- Multi-modal liveness combining video, depth (TrueDepth, structured light, or stereo), and challenge-response tasks.
- Device-bound credentials so even if the deepfake passes KYC once, the account is bound to a specific device.
- Cross-reference signals — IP reputation, device fingerprint, session behavior — that catch synthetic identities even when the deepfake is technically convincing.
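To make the "no single signal decides" idea concrete, here is a minimal sketch of a multi-signal KYC decision. The signal names, weights, and thresholds are all hypothetical; a real system would tune them against labeled fraud outcomes and use far richer inputs.

```python
def kyc_decision(liveness_score: float,   # 0-1 from the liveness provider
                 device_is_bound: bool,   # credential bound to a known device
                 ip_reputation: float,    # 0 (bad) to 1 (clean)
                 ) -> str:
    """Return 'pass', 'step_up', or 'reject'. A convincing deepfake can
    max out the liveness score, but it can't fix device or network risk."""
    risk = 0.0
    risk += (1.0 - liveness_score) * 0.5   # liveness carries the most weight
    risk += 0.0 if device_is_bound else 0.2
    risk += (1.0 - ip_reputation) * 0.3
    if risk < 0.2:
        return "pass"
    if risk < 0.5:
        return "step_up"   # extra challenge-response or manual review
    return "reject"
```

The design choice worth noting: even a perfect liveness score (1.0) from an unbound device on a bad network lands in `step_up`, which is exactly the case a deepfake injection attack produces.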
3. Insurance fraud
This is the category growing fastest in 2026. Auto, home, and health insurance carriers are now seeing AI-generated damage photos (a "totaled" car, a "flooded" basement, a "broken" appliance) submitted as part of claims. Carriers report that synthetic-image submissions roughly tripled between 2024 and 2025.
The economic incentive is straightforward. Generating a convincing damage photo with Midjourney or DALL-E costs roughly $0. A fraudulent claim that gets approved can be worth $5,000–$50,000. Even a 5% success rate on filed claims is wildly profitable for fraud rings.
What works:
- An image-detection API in the claim-intake flow. Every photo gets scanned; suspicious photos route to special investigation. The combination of low cost (a few cents per claim) and high catch rate (90%+ on AI-generated photos) makes this an obvious add.
- EXIF and device-signature checks. Real damage photos taken with the policyholder's phone have device signatures that match the policyholder's history. Photos taken on a different device than the policyholder normally uses are a yellow flag.
- Second-image consistency. Real claims usually involve multiple photos from multiple angles. Fraud rings often submit one or two; or, when they submit many, the photos have inconsistencies (lighting, time of day, geographic markers) that real photos from a single event wouldn't show.
- Claims-graph analysis. Fraud rings file claims across multiple policies, multiple carriers, sometimes multiple countries. Industry-wide databases catch patterns that no single carrier can see.
Most major P&C carriers now run AI-detection APIs on every photo at intake. The economics make it close to mandatory: the cost of detection per claim is under $0.10; the savings per fraud caught are five figures or more.
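The intake-time scan can be sketched in a few lines. The detection client and its response shape (a 0-1 score) are placeholders for whatever provider you integrate, and the thresholds are illustrative, not recommendations:

```python
from typing import Callable

AUTO_CLEAR = 0.2      # below this, treat the photo as authentic
SIU_THRESHOLD = 0.8   # above this, route to special investigation

def route_claim_photos(photos: list[bytes],
                       detect: Callable[[bytes], float]) -> str:
    """Scan every photo in a claim; the worst score decides the route.
    `detect` wraps your detection API and returns a 0-1 synthetic score."""
    worst = max(detect(p) for p in photos)
    if worst >= SIU_THRESHOLD:
        return "siu_review"
    if worst >= AUTO_CLEAR:
        return "manual_verification"
    return "normal_flow"
```

Taking the *worst* score across the claim's photos, rather than the average, reflects the fraud pattern described above: one convincing synthetic photo among several real ones is still a fraudulent claim.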
4. Identity-based scams (romance, grandparent, voice cloning)
Consumer-side fraud, which doesn't make Fortune-500 fraud reports but adds up across millions of victims:
- Romance scams with deepfaked photos and videos building a fake relationship, eventually leading to a financial request.
- "Grandparent" scams using voice-cloning to impersonate a relative in distress.
- Crypto and investment scams featuring deepfaked celebrities endorsing the scheme.
The FTC's 2025 consumer fraud report attributed $2.7B of US consumer losses to identity-based scams that involved AI-generated content. The trend curve is steep.
Defenses are mostly downstream of detection: platforms are adding warning labels to suspicious content, banks are slowing high-risk transfers, and consumer-education programs are growing. But on the detection side, the answer is the same — APIs that catch synthetic content in user-facing media.
5. Election and reputation deepfakes
Not directly a financial fraud category but worth flagging. Synthetic content depicting public figures saying or doing things they didn't do has become a routine campaign-cycle event. Detection here matters less for direct loss prevention and more for democratic resilience. The same APIs and techniques apply.
Which industries are getting hit
The Visa $43.7B figure for 2025 breaks down roughly:
- Banking and payments — about 35% of total losses, mostly via account takeover and BEC.
- Insurance — about 22%, growing fastest. P&C plus life/health.
- Crypto and digital assets — about 18%, mostly account takeover and synthetic-identity scams.
- E-commerce and marketplaces — about 10%, from product-listing fraud, fake reviews with AI-generated photos, and seller-account takeover.
- Travel and hospitality — about 6%, from booking fraud and chargeback fraud with AI-generated supporting documents.
- All other — about 9%, including telecom, government services, healthcare, and education.
The categories getting hit hardest are the ones where (1) image or video evidence is part of the workflow, and (2) automation has stripped human review out of the routine cases. Any time the fraud surface is "submit a photo and an algorithm decides," there's a deepfake attack waiting.
What's actually working
Pattern across the companies that have stopped most of the bleeding:
Detection APIs run by default at every image and video intake point. Fraud detection used to be a "flag the suspicious cases" workflow. In 2026 the model is "scan everything, flag the few that look real-but-aren't." The economics make scanning everything obviously correct: cost per scan is a few cents; the cost of missing one fraud case is thousands.
Defense in depth. Companies that rely on a single signal (just liveness checks, just image detection, just device fingerprinting) are the ones still getting hit. The companies that have stopped most fraud combine 4-6 signals — detection API output, device signals, behavioral signals, cross-reference data, document forensics, and human review on borderline cases.
Procedural hardening. Wire-transfer approvals, payment changes, identity verification flows — every workflow that touches money or identity needs procedural friction that doesn't depend on visual confirmation alone. The deepfake exists; assume it will work; design around it.
Industry data sharing. No single company sees enough fraud to detect cross-organizational patterns alone. The industry sharing initiatives — bank consortiums in the US and EU, insurance fraud bureaus, crypto blockchain analysis firms — are catching multi-target fraud rings that any single member would miss.
Continuous re-training. Every detection method needs a feedback loop. When a fraud case is confirmed (or, worse, missed), the case needs to flow back into training data, so that when the same fraud ring tries a slightly different pattern, the next-version detector catches it. The companies treating this as a one-time deployment fail; the companies treating it as ongoing operations win.
The integration pattern
The architecture for an API-driven detection layer looks roughly like this:
- Intake. User submits content — photo for a claim, video for KYC, image with a marketplace listing.
- Detection scan. Every piece of content goes through an image/video detection API. Latency is sub-200ms for images, a few seconds for video.
- Risk score combination. Detection score gets combined with other signals (device, IP, history, account age) into a single risk score.
- Routing. Low-risk goes through normal flow. Mid-risk gets queued for additional verification. High-risk gets flagged for SIU or auto-rejected.
- Outcome logging. Whatever happens next — claim approved, claim denied, fraud confirmed — flows back into the data pipeline.
- Periodic recalibration. Detection thresholds, risk-score weights, and downstream policies all get retuned monthly or quarterly based on what was caught and what was missed.
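Steps 2-4 of the pattern above can be sketched end to end. Everything here is illustrative: the signal set, weights, and thresholds are placeholders that, per step 6, you would retune against your own confirmed-fraud outcomes.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    detection_score: float   # step 2: 0-1 from the detection API
    device_risk: float       # device fingerprint / reputation, 0-1
    account_age_days: int    # new accounts carry extra risk

def combined_risk(s: Signals) -> float:
    """Step 3: fold the detection score and other signals into one number."""
    age_risk = 1.0 if s.account_age_days < 30 else 0.0
    return 0.6 * s.detection_score + 0.25 * s.device_risk + 0.15 * age_risk

def route(risk: float) -> str:
    """Step 4: three-way routing on the combined score."""
    if risk < 0.3:
        return "normal_flow"
    if risk < 0.7:
        return "additional_verification"
    return "flag_or_reject"
```

Note that the detection score is weighted heavily but is never the sole input: a borderline detection score on a brand-new account from a risky device should still route to verification, which is what the combined score achieves.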
Companies that follow this pattern at scale typically see an 85-95% reduction in image-fraud losses within 6-12 months of deployment. The remaining losses are sophisticated cases that no automated system catches alone — those require human investigators with the right tooling.
If you're building this architecture, our AI Image Detector API is designed to slot into step 2 specifically. Sub-100ms latency, calibrated confidence scores, model-attribution data, and webhook-based async batch processing for high-volume pipelines. We've also written a pipeline architecture guide for AI image moderation that goes deeper on the details.
What's still unsolved
Some honest acknowledgments of the limits in 2026:
Adversarial deepfakes engineered against specific detectors. When a fraud ring has deep technical resources and a specific target detector to defeat, they can usually generate content that the detector misses. The defense is the same as in security generally: assume any single detector will eventually be defeated; combine multiple uncorrelated signals; rotate methods periodically.
Real-time video manipulation in live calls. Real-time face swaps in video conferences are getting good enough that detection during the call (vs analyzing the recording afterwards) is hard. Some platforms are deploying client-side detection that runs on the recipient's device; this is promising but immature.
Cross-modal fraud. Fraud that combines a deepfake with social engineering and supplementary fake documents is harder to catch with detection alone. Deepfake-only detection scoring of the visual artifact misses the broader pattern.
Edge cases at scale. Even at 99% detection accuracy, a pipeline processing 10M images per day misclassifies 100,000 of them daily. Whether those are 100K false positives (annoyed customers) or 100K false negatives (missed fraud), the absolute numbers matter. There is no escape from human review for the borderline cases; the question is just how small you can shrink the borderline.
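The back-of-envelope arithmetic above generalizes to any volume and accuracy, and it is worth running for your own pipeline before committing to review-queue staffing:

```python
def misclassified_per_day(daily_volume: int, accuracy: float) -> int:
    """Expected daily misclassifications at a given volume and accuracy."""
    return round(daily_volume * (1.0 - accuracy))
```

At 10M images/day, 99% accuracy leaves 100,000 misclassifications daily; even pushing accuracy to 99.9% still leaves 10,000 cases a day that land on someone's desk.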
Provenance gaps. C2PA adoption is real but still spotty. Major platforms frequently strip C2PA metadata when they re-encode images. The "every real image is signed" world is closer than it was in 2023, but it's not 2026's reality yet.
What to do this quarter if you're not yet protected
A pragmatic prioritization, ranked by ROI:
- Audit every workflow that accepts user-uploaded images or video. List them. Estimate fraud-loss exposure for each. Don't just look at obvious ones (claims, KYC); look at marketplace listings, profile photos, support tickets, expense reports.
- Deploy a detection API at the highest-exposure workflows first. Most providers offer free tiers and pay-as-you-go pricing that lets you stand up the integration in a week. The cost of not having it is now far higher than the cost of running it.
- Add procedural friction to high-value workflows. Wire-transfer thresholds, payment-account changes, identity-verification re-confirmations. These are cheap and high-impact.
- Stand up cross-team incident review. Fraud cases that touch multiple departments (SIU + claims + IT + legal) need a forum to share what's working. Most companies don't have this and lose institutional learning every time staff turns over.
- Buy or build the analytics. Not just point detection — the dashboards that show your fraud rate trend over time, by channel, by signal. Without those, you can't tell whether your detection is improving.
Frequently asked questions
How big is deepfake fraud, really?
Industry estimates for 2025 ranged from $40B to $50B globally, with 2026 projections trending higher. The actual number is hard to pin down because attribution is hard — many fraud cases involve deepfakes plus other techniques, and many companies don't disclose breakdowns publicly.
Is most deepfake fraud targeting individuals or companies?
Both, but the dollar concentration is in B2B. Individual-targeted scams (romance, grandparent, voice cloning) account for thousands of dollars per victim across millions of victims; B2B fraud (BEC, KYC compromise, insurance fraud) accounts for hundreds of thousands of dollars or more per case across hundreds of thousands of cases.
Are smaller companies safer than large ones?
No. Smaller companies often have weaker procedural controls and less detection investment, making them attractive targets. The well-publicized victims tend to be large because those cases are big enough to make news; attack frequency scales roughly with company size, but the success rate per attempt is often higher at smaller companies.
How much does running a detection API cost at our volume?
Pricing varies by provider and volume. As a rough benchmark, our Pro tier at $49/month covers 50,000 image scans — enough for most mid-market companies' insurance, KYC, or marketplace flows. Enterprise volumes (millions of scans monthly) typically run $0.001-$0.005 per scan after volume discounts. The ROI is essentially always positive once your fraud-loss exposure is over a few thousand dollars per month.
What detection accuracy should we require?
For high-value workflows, 95%+ accuracy on AI-generated content with a calibrated confidence score is the floor. More important than peak accuracy is the false-positive rate at the threshold you'll actually deploy at — a 99% accuracy detector with 5% false positives at production threshold creates more noise than signal.
How quickly can we deploy?
A first-pass integration takes a few days for a small engineering team. Wiring detection into intake, plumbing the score into your existing fraud risk model, and standing up monitoring takes another 2-4 weeks. Most companies see initial fraud-rate improvement within 30-60 days of deployment.
The deepfake-fraud problem is large, growing, and partially solvable. The defenses that work are not exotic — detection APIs, procedural hardening, multi-signal risk modeling, cross-team incident review. The companies that have deployed those defenses are stopping 85-95% of attacks. The companies that haven't are funding the fraud-ring economy.
If you're protecting an image- or video-intake workflow, our free API tier is built specifically for fraud-prevention integration: sub-100ms latency, calibrated scores, batch processing, and audit logging. Stand up an integration in a week and start measuring your loss reduction.
Try the AI Image Detector API
500 free scans per month. No credit card. Sub-100ms detection with model attribution and region heatmaps.
Get an API key →