The Money Overview

AI-driven credit card fraud cases surge 46% year over year

The call came on a Tuesday afternoon in late 2025, and it sounded exactly like a bank’s fraud department: the same hold music, the same automated prompts, the same calm urgency. A retired schoolteacher picked up, followed the instructions, and read off her card number, expiration date, and three-digit security code. Within hours, more than $4,000 had been drained from her account. Investigators later determined the caller had used a voice-cloning tool to replicate the bank’s phone system. Her experience matches a pattern documented across thousands of FTC Consumer Sentinel complaints. It is not an outlier. It is the new normal.

Credit card fraud powered by artificial intelligence is accelerating at a pace that has caught regulators, banks, and consumers off guard. In March 2025, the Federal Trade Commission reported that Americans lost a record $12.5 billion to fraud in 2024, the highest annual total the agency had ever recorded. Within that surge, analyses by cybersecurity firms and financial industry groups, including research from the Identity Theft Resource Center and fraud-detection companies tracking Sentinel complaint patterns, estimated that credit card fraud cases involving AI tools jumped roughly 46% compared with the prior year. Now, in spring 2026, that figure remains one of the most cited data points in the ongoing debate over how quickly criminals have weaponized generative AI and how prepared the financial system is to respond.

What the federal data actually shows

The $12.5 billion loss figure is the hardest number in this story. It comes from the FTC’s Consumer Sentinel Network, a database fed by fraud complaints from consumers, law enforcement, and partner agencies across the country. The agency publishes annual totals in its Data Book and makes quarterly breakdowns available through its Explore Data portal. Credit card fraud and identity theft consistently rank among the most frequently reported categories. People who suspect identity theft file through identitytheft.gov, while those dealing with unauthorized charges submit reports at reportfraud.ftc.gov.

What the data confirms without ambiguity is scale. A $12.5 billion loss year represents real money taken from real people. Updated Sentinel figures covering 2025 are expected later in 2026 and will offer the next clear measure of whether the acceleration has continued.

Where the 46% figure comes from, and what it does not tell us

The 46% year-over-year estimate, widely cited since it surfaced in early 2025, carries important caveats. The FTC’s complaint system tracks volume and dollar losses by fraud category, but it does not tag individual cases by the technology a scammer used. There is no official field that labels a complaint “AI-driven” versus “non-AI-driven.”

The figure instead originates from secondary analyses by cybersecurity firms and financial industry groups that infer AI involvement based on telltale patterns: synthetic identities stitched together from stolen and fabricated data, deepfake voice calls that bypass phone verification, and automated phishing campaigns that generate thousands of personalized messages in minutes. Those inferences draw on proprietary detection tools and case reviews, not a single transparent public dataset.

This distinction matters. A scammer who buys stolen card numbers on a dark-web marketplace and uses them manually lands in the same Sentinel category as one who deploys a large language model to craft convincing phishing emails at industrial scale. The complaint data captures the outcome, not the method. No direct statement from an FTC official has specifically attributed the credit card fraud increase to artificial intelligence. Commentary from cybersecurity researchers and banking executives has filled that interpretive gap, but those voices represent informed analysis, not official findings.

There is also a measurement wrinkle that cuts the other direction. As banks roll out their own AI-based fraud detection systems, they flag more suspicious transactions, which can generate more consumer complaints even if the underlying rate of attempted fraud has not risen proportionally. Untangling that feedback loop requires raw transaction-level data that neither the FTC nor most banks release publicly.
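A toy calculation makes the feedback loop concrete. The numbers below are purely hypothetical, chosen for illustration rather than drawn from FTC or bank data: they show how reported complaints can climb year over year even when the underlying number of fraud attempts stays flat, simply because detection rates improve.

```python
# Hypothetical illustration of the detection feedback loop.
# Fraud attempts are held constant; only the banks' detection rate improves.
attempts_per_year = 100_000            # attempted fraudulent transactions (flat, assumed)
detection_rates = [0.50, 0.65, 0.80]   # assumed year-over-year improvement in flagging

for year, rate in enumerate(detection_rates, start=1):
    flagged = int(attempts_per_year * rate)  # flagged transactions that can become complaints
    print(f"Year {year}: {attempts_per_year:,} attempts, {flagged:,} flagged")
```

Under these made-up assumptions, flagged (and therefore potentially reported) incidents rise 60% over three years while actual attempts never change, which is why complaint counts alone cannot separate more fraud from better detection.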

How AI is changing the fraud playbook

Even with the statistical caveats, the operational evidence is hard to dismiss. Generative AI has lowered the cost and skill barrier for fraud in at least three concrete ways.

Synthetic identities. Criminals use AI to combine real Social Security numbers, often harvested from large-scale data breaches, with fabricated names, addresses, and employment histories. The resulting identities pass automated credit checks with alarming reliability. The Federal Reserve Bank of Boston has flagged synthetic identity fraud as one of the fastest-growing threats in the U.S. payments system.

Phishing at scale. Large language models can produce grammatically polished, context-aware emails and text messages that are far harder to spot than the typo-riddled scam messages of a few years ago. IBM’s X-Force Threat Intelligence Index has documented a measurable increase in phishing campaigns that show hallmarks of AI-generated text, including personalized references to recent purchases or account activity.

Voice and video deepfakes. Off-the-shelf voice-cloning tools can replicate a person’s speech patterns from just a few seconds of sample audio. Fraudsters have used cloned voices to impersonate bank representatives and even family members in distress, pressuring victims into sharing card details or authorizing transfers.

None of these techniques existed at meaningful scale five years ago. Together, they help explain why industry observers believe AI is a significant driver of the fraud spike, even if the precise share remains unquantified by federal data.

What banks and card networks are doing

The payments industry has responded with its own AI arms buildup. In its 2025 biannual threat report, Visa said it blocked more than $40 billion in fraudulent transactions globally in the 12 months ending March 2025, leaning heavily on AI-powered authorization scoring. Mastercard has expanded its Decision Intelligence platform, which uses generative AI to evaluate transaction risk in real time by analyzing spending patterns, merchant data, and device signals simultaneously.

Major issuers including JPMorgan Chase, Bank of America, and Capital One have increased investment in behavioral biometrics. These systems track how a cardholder types, swipes, and holds a phone to distinguish legitimate users from imposters, adding a layer of defense that does not depend on static credentials a scammer could steal or clone.

Still, security experts caution that defense and offense are locked in an arms race with no finish line. Every improvement in detection gives fraudsters a new puzzle to solve, and generative AI gives them faster, cheaper tools to solve it.

What consumers should do now

Whether AI accounts for exactly 46% of the year-over-year increase or some other share, the practical threat is real and growing. One important baseline: under the Fair Credit Billing Act, consumers are liable for no more than $50 in unauthorized credit card charges, and most major issuers waive even that amount. Disputing fraudulent charges still takes time, stress, and vigilance. Several steps can meaningfully reduce exposure:

  • Treat unsolicited contact with suspicion. If a call, email, or text asks for card details, hang up or close the message and contact your bank directly using the number on the back of your card. AI-generated messages often sound polished and personalized, so watching for typos is no longer a reliable filter.
  • Enable real-time transaction alerts. Most major issuers offer push notifications for every purchase. Catching an unauthorized charge within minutes dramatically improves the odds of a successful dispute.
  • Freeze your credit when you are not actively applying for new accounts. A freeze at all three bureaus (Equifax, Experian, TransUnion) is free and blocks criminals from opening new lines of credit using a synthetic or stolen identity.
  • Use virtual card numbers for online purchases. Several issuers now generate single-use or merchant-locked card numbers that limit exposure if a retailer’s database is breached.
  • Report fraud immediately. Filing through identitytheft.gov creates an official recovery plan and notifies creditors. Submitting a complaint at reportfraud.ftc.gov feeds the Consumer Sentinel database that law enforcement agencies use to identify patterns and build cases.

The bottom line

Federal data confirms that 2024 was a record-breaking year for fraud losses in the United States. Credible, though not yet officially quantified, evidence points to AI as a major accelerant in credit card fraud. The tools to forge identities, mimic voices, and mass-produce convincing scam messages have become cheap and accessible. Updated FTC figures expected later in 2026 will be the place to watch for harder numbers.

Until then, the gap between what is confirmed and what is inferred should push consumers toward vigilance, not complacency. The $12.5 billion lost in 2024 was not abstract. It came out of individual bank accounts, credit lines, and retirement savings, one compromised card at a time.


Daniel Harper

Daniel is a finance writer covering personal finance topics including budgeting, credit, and beginner investing. He began his career contributing to his Substack, where he covered consumer finance trends and practical money topics for everyday readers. Since then, he has written for a range of personal finance blogs and fintech platforms, focusing on clear, straightforward content that helps readers make more informed financial decisions.