The Money Overview

Protecting bank accounts from the newest wave of AI-driven scams

The call sounds exactly like your bank. The voice is calm, professional, and knows your name. It warns about suspicious activity on your account and asks you to confirm your identity. But the person on the other end is not a bank employee. In a growing number of cases, the voice itself may be synthetic, generated in seconds by AI software trained on a few snippets of recorded speech.

Across the country, criminals are using generative AI to clone voices, craft polished phishing messages, and impersonate financial institutions with a precision that would have been unthinkable five years ago. The financial toll is enormous: Americans reported $12.5 billion stolen through fraud in 2024, according to the Federal Trade Commission, a sharp jump from the year before. And as of May 2026, federal and state regulators say the threat has only intensified.

How AI supercharges bank fraud

In a December 2024 public service announcement, the FBI’s Internet Crime Complaint Center identified four specific ways criminals are weaponizing generative AI for financial fraud:

  • Scaling social engineering. AI lets a single scammer generate thousands of personalized messages, each tailored to a target’s name, location, or recent transactions.

  • Eliminating telltale errors. Phishing emails that once arrived riddled with typos now read like polished corporate correspondence.

  • Breaking language barriers. AI-powered translation allows fraud rings to target victims in any language, expanding their reach far beyond English-speaking populations.

  • Creating deepfakes. Realistic fake audio, video, and documents make it possible to impersonate a bank’s fraud department on a phone call or even a video chat.

Each capability strips away one of the gut checks people have long relied on to spot a scam. A voicemail that sounds identical to a bank representative, a phishing email with flawless grammar, a video call featuring a face that matches a real employee: these are no longer rare edge cases.

A separate FBI alert from August 2024 described a related scheme in which criminals pose as bank representatives to trick customers into surrendering chip-enabled debit cards or card data. That warning did not explicitly mention AI tools, but the playbook fits seamlessly with what generative AI enables: a convincing cloned voice, a spoofed caller ID, and a scripted sense of urgency designed to override a victim’s skepticism.

The FTC’s Consumer Sentinel Network Data Book for 2024, which compiles millions of consumer reports, confirms that impersonation scams rank among the most financially damaging fraud categories hitting Americans. The $12.5 billion total covers all reported fraud, not only AI-driven schemes. But impersonation, the category where AI has the most obvious impact, accounts for a significant share of those losses.

Regulators are raising alarms, but key gaps persist

State regulators have moved faster than most federal agencies to address the threat directly. In October 2024 guidance, the New York State Department of Financial Services warned banks and insurers about cybersecurity risks tied to artificial intelligence, flagging deepfake-driven fraud, social engineering, and weaknesses in authentication systems. The letter recommended layered defenses, including multi-factor authentication and zero-trust security models, and cited real incidents to show that these threats have already materialized.

At the federal level, the FTC proposed new protections in February 2024 specifically targeting AI-powered impersonation of individuals. The proposal drew a direct line between the growth of AI-generated content and the expansion of scam operations. More than two years later, the rule’s final status is not reflected in publicly available records as of May 2026, a delay that itself underscores how quickly the technology is outpacing the regulatory response.

What neither agency has provided is the data that would let anyone measure the AI-specific slice of the problem. The FBI’s warnings describe methods and trends but do not attach dollar losses or complaint counts to AI-driven schemes specifically. The FTC’s $12.5 billion figure covers the full fraud landscape. Without that breakdown, precise claims about the percentage of bank fraud attributable to AI remain speculative.

Banks, for their part, have been largely silent. No major institution has publicly disclosed how many of its customers have been targeted by AI-generated impersonation attacks, and the NYDFS guidance, while detailed in its recommendations, does not name specific banks or cite enforcement actions. That opacity leaves consumers unable to judge which institutions are most exposed and which are investing most aggressively in defenses.

There is also an open question about how well existing security measures hold up. The NYDFS recommends multi-factor authentication and zero-trust frameworks, and some banks have begun deploying AI-based fraud detection tools that analyze voice patterns and behavioral signals during calls. But no public data yet confirms how effective those defenses have been against deepfake voice phishing or AI-crafted social engineering specifically.

What bank customers should do right now

Federal and state guidance converges on several concrete steps that can reduce exposure immediately.

Verify every unexpected contact independently. If someone calls, texts, or emails claiming to represent your bank, do not engage on their terms. Hang up and call the institution directly using the number printed on the back of your debit card or listed on the bank’s official website. Never use a phone number or link provided in the suspicious message itself.

Turn on multi-factor authentication today. The NYDFS guidance specifically recommends MFA as a critical defensive layer. If your bank offers it and you have not enabled it, do so now. MFA will not stop every attack, but it forces an attacker to clear an additional hurdle that automated AI-driven schemes struggle to bypass without significant extra effort.

Report suspicious activity, even if you did not lose money. Anyone who suspects they have been targeted can file a report through the FTC’s fraud reporting portal. If personal information has been compromised, IdentityTheft.gov provides step-by-step recovery guidance. These reports feed the databases regulators use to track emerging threats, so filing one helps far more than just the individual victim.

Treat urgency as a red flag, not a reason to act. The common thread across AI-powered scams is manufactured pressure. A legitimate bank will never demand that you act immediately over the phone, hand your card to a courier, or transfer funds to a “safe” account. Any communication that pushes you to move before you can think is suspicious, no matter how polished it sounds.

Limit what you share publicly. Voice-cloning tools need only a few seconds of audio to generate a convincing replica. Social media videos, voicemail greetings, and even conference recordings can supply that raw material. Reviewing your public digital footprint is now a basic security step.

Why this problem is likely to get worse before it gets better

The $12.5 billion in reported fraud losses for 2024 almost certainly understates the real damage, since many victims never file reports. Generative AI tools are becoming cheaper, more capable, and more accessible with each passing quarter, which means the techniques the FBI warned about in late 2024 are only easier to deploy at scale now.

The gap between what regulators know about AI-driven bank fraud and what they have publicly quantified is real. Until agencies begin publishing AI-specific complaint data, and until banks disclose more about the attacks they are seeing and the defenses they are deploying, consumers are left to protect themselves with the tools available.

Those tools work, but only if people use them before the money is gone. Verify before you trust. Authenticate before you transact. And if a call from your bank feels even slightly off, hang up. The real bank will still be there when you call back.

Daniel Harper

Daniel is a finance writer covering personal finance topics including budgeting, credit, and beginner investing. He began his career contributing to his Substack, where he covered consumer finance trends and practical money topics for everyday readers. Since then, he has written for a range of personal finance blogs and fintech platforms, focusing on clear, straightforward content that helps readers make more informed financial decisions.