Chapter 09: AI Turned Fraud Into a $12.5 Billion Weapon

They Sound Like Your Daughter. They Look Like Your Bank. They Are Neither.

Jennifer DeStefano was standing in a dance studio in Scottsdale, Arizona, watching her younger daughter practice when her phone rang. On the other end, she heard her 15 year old, Brianna, sobbing. "Mom! I messed up." Then a man's voice cut in with threats so graphic they cannot be fully repeated here. He said he had Brianna. He demanded a million dollars.

Jennifer's knees buckled. She started screaming. Other parents at the studio scrambled to call 911 and Jennifer's husband. Four minutes later, they confirmed that Brianna was safe on her ski trip, completely unaware that her mother had just lived through the worst moments of her life.

That voice on the phone was not Brianna. It was a machine. An AI tool had cloned her daughter's voice using audio scraped from social media, and it reproduced her inflection, her cry, her panic so perfectly that a mother who had raised that child for 15 years could not tell the difference. Jennifer later testified before the United States Senate. "It was completely her voice," she said. "It was her inflection. It was the way she would have cried. I never doubted for one second it was her."

Here is what most Americans do not yet understand. What happened to Jennifer DeStefano is no longer rare. It is no longer expensive. It is no longer difficult. The same technology that cloned Brianna's voice is now available to anyone with an internet connection for free, and it needs as little as three seconds of audio to work. The Federal Trade Commission reports that American consumers lost $12.5 billion to fraud in 2024, a 25 percent jump over the prior year, and by the end of September 2025, losses had already hit $12.3 billion with an entire quarter still to go. The real number, adjusted for the vast majority of victims who never report, could reach $195.9 billion. 

Artificial intelligence did not create fraud. It made fraud so fast, so cheap, so emotionally convincing, and so personally targeted that the rules you grew up with no longer apply. This chapter will show you exactly how that happened, who is profiting, and what you and your family need to do about it starting today.

The Old Cons Got a New Brain

Fraud has existed for as long as humans have communicated. Confidence games, impersonation, forged documents, fake identities. None of that is new. What is new is the speed, the cost, and the precision that artificial intelligence brings to every single one of those old tricks.

Start with voice cloning. McAfee Labs found that modern AI tools produce an 85 percent match to a real person's voice from just three seconds of recorded audio. Feed the system more data and accuracy climbs to 95 percent. More than a dozen free voice cloning tools sit on the open internet right now, requiring nothing more than basic computer skills. ElevenLabs, one of the most popular commercial platforms, offers instant voice cloning from ten seconds of audio for about eleven dollars a month. University of California, Berkeley researchers tested whether people could tell the difference between a cloned voice and the real thing. Participants got it right only about 60 percent of the time. That is barely better than flipping a coin.

Now add deepfake video. Deepfake scams increased tenfold in 2024, and North America saw a staggering 1,740 percent spike. The average American now encounters 2.6 deepfake videos every single day. A convincing deepfake scam video costs as little as five dollars and takes under ten minutes to create. The deepfake robocall impersonating President Biden that disrupted the 2024 New Hampshire primary cost approximately one dollar to produce.

Then there is AI generated text. The days of spotting a scam email because of broken English and bizarre formatting are over. Today, 82.6 percent of phishing emails use some form of AI. A 2024 study found that AI generated phishing emails achieved a 54 percent click through rate, more than four times the success rate of traditional phishing attempts. Scammers now compose convincing emails 40 percent faster than they did before AI, and 40 percent of business email compromise messages in 2024 were AI generated.

These capabilities came together in a single terrifying case in January 2024. A finance employee at Arup, a multinational engineering firm, joined what he believed was a video conference call with his company's chief financial officer and several colleagues. Every person on that call was an AI generated deepfake. The employee, convinced he was following legitimate orders, made 15 transfers to five different bank accounts. Total loss: $25.6 million. Arup's chief information officer, Rob Greig, later tried to create a deepfake of himself using freely available software. It took him about 45 minutes.

The newest and most alarming development is the rise of AI agents, autonomous systems that run scams without a human operator watching over them. Social engineering attacks surged nearly threefold in 2025 as AI agents powered fraudulent call centers, managing dozens of conversations at once, adjusting tone and personality for each target. On the dark web, a growing marketplace sells scam software for as little as twenty dollars, giving low skilled criminals access to tools that automate entire fraud operations from the first contact to the final payment demand. Experian's 2026 forecast identifies these emotionally intelligent bots as a top threat, noting that a single bot can sustain dozens of simultaneous fake relationships, and each victim believes they are the only person receiving attention.

$12.5 Billion and Counting

The Federal Trade Commission released its fraud data in March 2025, and the numbers hit like a freight train. American consumers reported losing more than $12.5 billion to fraud in 2024, up from $10 billion in 2023, $8.8 billion in 2022, and $5.8 billion in 2021. The number of fraud reports stayed roughly the same at about 2.6 million. What changed was the percentage of victims who actually lost money, which jumped from 27 percent in 2023 to 38 percent in 2024. That means the scams are working more often. They are getting better. 

Investment scams alone accounted for $5.7 billion. Imposter scams drove $2.95 billion in losses. Government impersonation scams totaled $789 million. Job scams reached $501 million, up from just $90 million in 2020. Social media was the contact method that generated the highest total losses at $1.9 billion.

By the first three quarters of 2025, the picture grew even worse. The FTC reported $12.3 billion in losses before October, essentially matching the entire 2024 total with months still remaining. Investment scam losses reached $6.1 billion, already surpassing the full 2024 figure. Nearly 80 percent of people who reported an investment scam lost money, with a median loss of $10,000.

The FTC has responded with enforcement actions. Operation AI Comply, launched in September 2024, brought five simultaneous cases against companies using AI to deceive consumers. These included actions against Rytr LLC for generating fake reviews, DoNotPay for falsely claiming its chatbot was the world's first robot lawyer, and FBA Machine for defrauding consumers of more than $15 million through fake AI powered online storefronts. 

The Government and Business Impersonation Rule, finalized in April 2024, gave the FTC authority to seek monetary relief directly in federal court, with penalties up to $53,088 per violation. The FCC banned AI generated voices in robocalls and gave state attorneys general the authority to pursue legal action. In October 2024, the FTC's Consumer Reviews Rule made publishing fake reviews illegal with fines up to $51,744 per violation.

Enforcement has continued under the current administration, with actions stopping Click Profit's $14 million AI passive income scheme and targeting Air AI for deceptive earnings claims. FTC Chairman Andrew Ferguson testified before Congress in May 2025 that the agency needs more resources, a request that drew bipartisan support. The bottom line is that the government is working on this. The bottom line is also that the government cannot keep up.

Romance Bots That Never Sleep

The cruelest category of AI powered fraud targets the most human need of all: the need to be loved.

Romance scam losses exceeded $1.3 billion in 2024 according to FBI data. Adults over 60 lost an average of $19,000 per romance scam. Global reports of romance scams jumped 63 percent between 2024 and 2025. And the person on the other end of those messages, the one writing beautiful words and asking thoughtful questions and remembering the details of your life? Increasingly, that person does not exist. It is a bot.

AI powered romance bots now sustain months long emotional relationships without any human involvement at all. Large language models generate fluent, emotionally resonant text and maintain consistent personality traits across weeks and months of conversation. They eliminate the warning signs that used to tip people off: poor grammar, repetitive phrases, contradictory backstories. These bots analyze their targets' online behavior, track emotional states, and adjust their communication patterns in real time. Researchers at the Alan Turing Institute tested multiple AI systems as romance scam personas and found they could move through every stage of a romance scam with disturbing skill, flooding victims with affection and fabricating crises to extract money.

The grooming process typically unfolds over six to eight months. The bot or the scammer behind it develops a deep emotional bond before asking for a dime. The relationship feels permanent from the start, with talk of marriage and a shared future coming early. Tragic stories about accidents or losses create sympathy and urgency. Small gift requests test the waters before escalating to large sums. Researchers have documented how this pattern shares characteristics with domestic violence: distortion of reality, economic abuse, isolation, and fear.

Beth Hyland of Portage, Michigan, a recently divorced woman, matched with someone on Tinder whose profile was strikingly similar to hers. Within ten days, they were talking about falling in love. A brief video chat, later identified as an AI generated deepfake, quieted the small voice in her head that wondered if this was too good to be true. The scammer fabricated work travel to San Diego and then to Qatar, sharing fake receipts and checking in daily. Beth took out multiple loans and sent $26,000 in Bitcoin, more than a quarter of her retirement savings, before a $50,000 activation fee request prompted her financial advisor to identify the scheme. She was lucky someone intervened. Many victims have no one looking over their shoulder.

The scale of these operations is staggering. University of Texas researchers found that so called pig butchering networks, long term investment fraud operations that cultivate victims over months before draining their finances, moved more than $75 billion to cryptocurrency exchanges between January 2020 and February 2024. The FBI reports Americans lost $6.5 billion to cryptocurrency investment scams in 2024. The blockchain analytics firm Chainalysis found that AI enabled scams were 4.5 times more profitable than traditional fraud. Behind many of these operations is a human tragedy that mirrors the victim's own: the United Nations estimates more than 200,000 people are trapped in scam compounds across Southeast Asia, trafficked from 66 countries and forced to scam under threat of violence.

The FBI's Operation Level Up, launched in January 2024, notified 8,103 victims of crypto investment fraud. Seventy seven percent of them had no idea they were being scammed. The operation saved an estimated $511.5 million. Eighty victims were referred for suicide intervention. That last number should stop you cold. People are taking their own lives over this.

Fake Stores, Fake Reviews, Fake Everything

In February 2026, a fake website impersonating Italian hair care brand Davines appeared as a top sponsored Google search result. On a mobile phone, the site at davineas.com was nearly indistinguishable from the real brand. No misspellings. No clumsy graphics. Professional product descriptions and what appeared to be responsive customer service. A cybersecurity analyst identified telltale patterns in the site's code suggesting AI generation. As one security executive told a national news outlet: "It is the same scam. It is just cheaper to do it on a broader scale. And that means the return on investment is higher."

AI tools now allow scammers to create entire fake e commerce brands in minutes, complete with business histories, customer testimonials, and AI powered customer service chatbots that stall consumers with scripted excuses long enough to prevent chargebacks. One cybersecurity firm identified 100,000 AI generated websites impersonating nearly 200 different brands in 2025 alone. Security researchers used a popular AI website builder to create a fake Walmart store as a proof of concept, showing how AI generates product descriptions, images, reviews, business histories, terms of service, and privacy policies. Every signal that consumers rely on to evaluate whether a website is real can now be faked in minutes.

The fake review ecosystem makes the problem worse. An estimated 30 percent of all online reviews are fake. One study found that 3 percent of front page Amazon reviews were AI generated, and nearly three quarters of those were five star reviews carrying a verified purchase label. On Zillow, 23.7 percent of real estate agent reviews in 2025 were likely AI generated, up from 3.63 percent in 2019. Academic research has confirmed that people cannot tell the difference between an AI written review and a human written one. The FTC estimates that businesses buying fake reviews see a 1,900 percent return on investment. Fake review fraud costs global businesses $152 billion annually. And every fake five star review steers a real consumer toward a product or service that did not earn that trust.

Your Voice Is Not Your Own

Let me bring this back to the phone call that started this chapter, because the voice cloning scams targeting families represent something that goes beyond financial loss. They strike at the deepest emotional bonds we have.

After Jennifer DeStefano's case made national news, the reports poured in. Sharon Brightwell of Dover, Florida, lost $15,000 in hours after hearing what sounded exactly like her daughter April crying hysterically, claiming she had caused a car accident involving a pregnant woman. A second voice posed as an attorney demanding bail money. "I know my daughter's cry," Brightwell said. "There is nobody that could convince me that it was not her." A 17 year old Chinese exchange student named Kai Zhuang prompted an $80,000 ransom payment from his family in 2024 after scammers orchestrated a fake kidnapping. In Alabama, great grandparents Alice and Frank Boren were targeted by scammers who cloned their great grandson's voice and demanded $11,000 in bail money.

McAfee found that 53 percent of adults share their voice data online at least once a week through social media videos, voicemail greetings, and podcasts. Every one of those clips is a potential source for voice cloning. Dan Mayo, assistant special agent in charge at the FBI's Phoenix field office, put it plainly: "You have got to keep that stuff locked down. If you have it public, you are allowing yourself to be scammed."

These scams work because of basic neuroscience. When a parent or grandparent hears their loved one's voice in distress, rational thinking shuts down. The emotional realism of a cloned voice bypasses the skepticism that might catch a text based scam. Your brain does not pause to analyze. It reacts. And by the time you stop to think, the money is gone.

The financial toll on older Americans is devastating. FBI data shows Americans over 60 lost $4.88 billion to cybercrime in 2024, a 43 percent increase. The FTC's December 2025 report to Congress found that fraud losses among adults 60 and older have quadrupled since 2020, rising from roughly $600 million to $2.4 billion in 2024. Combined losses among older adults who each lost more than $100,000 increased eightfold, reaching $445 million in 2024. Amanda Senn, director of the Alabama Securities Commission, said the likelihood of recovering stolen money is "slim to none."

Forty six states have now enacted legislation targeting AI generated deepfakes, with 146 deepfake specific bills introduced in 2025 alone. The NO FAKES Act, reintroduced with bipartisan support in April 2025, would establish a federal right to your own voice and visual likeness. Tennessee's ELVIS Act was the first state law to prohibit using AI to mimic an artist's voice without permission. The FCC has ruled that AI generated voice calls are illegal robocalls under federal law. The legal framework is expanding. It is not expanding fast enough.

Data Brokers: The Scammer's Secret Weapon

Every AI powered scam becomes exponentially more convincing when the scammer also has your name, your spouse's name, your home address, your recent purchases, and your financial situation. I covered data brokers earlier in the book, but the point bears repeating here: combining data brokers with AI is like pouring gasoline on a fire.

The scope of what these companies collect is almost impossible to overstate. Acxiom maintains up to 10,000 unique data points on more than 300 million Americans across 23,000 servers. The data available for purchase includes names, addresses, phone numbers, Social Security numbers, income levels, credit scores, family members' names, medical conditions, prescriptions, political affiliations, browsing history, purchase records, and precise geolocation data showing visits to health clinics, places of worship, and domestic violence shelters. The Consumer Financial Protection Bureau documented how brokers sell demographic categories with labels like "Economically anxious elders" and "behind on bills." Those are targeting maps for predators.

This is not a theoretical risk. In 2021, the Department of Justice charged Epsilon Data Management criminally for selling data on more than 30 million consumers to perpetrators of elder fraud schemes between 2008 and 2017. Epsilon's employees knowingly sold lists to fraudulent mass mailing operations running fake sweepstakes and astrology solicitations that disproportionately affected elderly and other vulnerable people.

One of Epsilon's clients defrauded 218,000 victims of $23.7 million, and 12,000 of those victims were defrauded more than 20 times each. Epsilon agreed to pay $150 million. Two former executives went to federal prison. Earlier, another data company called InfoUSA sold a list of 19,000 elderly sweepstakes players to scam artists who stole over $100 million. A researcher named Joana Moll purchased one million online dating profiles from a data broker for less than $150. One million profiles. For the price of dinner for two.

California has taken the most aggressive action. The Delete Act launched the DROP platform on January 1, 2026, covering more than 500 registered brokers and allowing residents to request deletion with a single action. California's privacy agency also launched a Data Broker Enforcement Strike Force in late 2025. A February 2026 Joint Economic Committee report found that data broker breaches have cost American consumers approximately $20.8 billion. The CFPB proposed a federal rule in December 2024 that would have treated data brokers as consumer reporting agencies and limited the sale of Social Security numbers, phone numbers, and financial data. That rule was quietly withdrawn in May 2025 under the new administration.

The Tipping Point

On January 13, 2026, Experian released its annual Future of Fraud Forecast, and the language was unusually blunt. Fraud, Experian said, will reach a "tipping point" in 2026 that will force major conversations about liability, regulation, and the role of AI agents in digital commerce. Experian identified five specific threats for the coming year, with the top threat being what it calls "Machine to Machine Mayhem," the collision between legitimate AI agents that consumers are starting to use for shopping and transactions and malicious AI agents deployed by fraudsters. As consumers hand over more decisions to AI, businesses face the nearly impossible challenge of telling a good bot from a bad one.

Experian is not alone in sounding the alarm. Deloitte's Center for Financial Services projects AI powered fraud losses will climb to $40 billion by 2027, compounding at 32 percent annually. TransUnion's 2025 Global Fraud Report found that surveyed firms lost an average of 7.7 percent of revenue to fraud, with U.S. companies losing 9.8 percent. One cryptocurrency analytics firm reported $14 billion in crypto scam losses in 2025, with AI enabled operations proving 4.5 times more profitable than traditional fraud. For as little as $50 per month, anyone can now access phishing kits, mule networks, automation tools, and synthetic identity creation software. Warren Buffett called AI enabled fraud "the growth industry of all time." He was not joking.

Your Family's Defense Plan

Here is what I need you to understand. The old rules are gone. "Look for spelling errors" does not work when AI writes flawless English. "Do not send money to strangers" does not work when the stranger sounds exactly like your granddaughter. Protecting your family in 2026 requires a completely new way of thinking about trust.

The single most important step your family can take today is establishing a safe word. This is a unique code known only to your family members, one that must be spoken in any emergency call before anyone sends money or takes action. Do not pick a pet's name, a street address, or anything that could be found on social media. Make it random. Make it memorable. And make sure every person in your family, especially your parents and grandparents, knows it by heart. The Identity Theft Resource Center's CEO, Eva Velasquez, confirms that family safe words are a genuinely effective tool when used properly.

Pair your safe word with a strict callback protocol. If you receive a panicked call from a family member, hang up. Call that person directly using a number already saved in your phone. Never call a number provided by the caller. As law enforcement agencies across the country advise, a genuine emergency will still be an emergency five minutes from now.

Understanding why scams work is itself a form of defense. AI powered fraud targets specific cognitive weaknesses that every human being shares. Authority bias makes you comply with perceived authority figures like banks or police. Urgency and panic impair your prefrontal cortex's ability to think rationally. Loss aversion drives you to act rashly to avoid losing something. Truth bias makes you assume other people are telling the truth by default. 

Research shows that falling for scams has nothing to do with intelligence. Studies find people aged 35 to 44 are among the most likely to be victimized, and those aged 18 to 24 lost the most money. More than half of consumers told AARP researchers they are "somewhat or very confident" they could detect AI fraud. AARP's Kathy Stokes calls that dangerous overconfidence: "By its nature, AI is capable of making fraud attempts imperceptible." The most dangerous belief any of us can hold is "it cannot happen to me."

Lock down your social media. Set every profile to private. Reduce the amount of audio and video you share publicly. McAfee found that 53 percent of adults post their voice data online at least once a week. Every public video, every voicemail greeting, every podcast appearance is potential raw material for a voice clone.

Set up financial safeguards. Turn on real time transaction alerts for all your accounts. Set daily spending and transfer limits. If you have elderly parents or grandparents, consider shared visibility into their financial accounts so unusual activity gets spotted fast. Establish as a family rule that no one ever sends money through gift cards, wire transfers, cryptocurrency, or payment apps in response to an unsolicited request. No legitimate entity will ever ask for payment in those forms.

Freeze your credit at all three bureaus. This is free. It costs nothing to place and nothing to lift. Call Equifax at 888 378 4329. Go to experian.com/freeze. Call TransUnion at 888 909 8872. A credit freeze prevents anyone from opening new accounts in your name.

Look into deepfake detection tools. McAfee's Deepfake Detector, Trend Micro Check, and Reality Defender all offer consumer grade scanning for AI generated audio and video. The market for these tools is growing rapidly.

Get your data out of broker databases. If you live in California, use the new DROP platform launched January 1, 2026, to request deletion from more than 500 registered data brokers with a single action. Services like McAfee Personal Data Cleanup and DeleteMe can automate the process across multiple brokers for anyone in the country.

Talk to the older adults in your life. Have these conversations regularly, not once. Make clear that no legitimate bank, government agency, or law enforcement officer will ever demand immediate payment, request gift cards, or insist on secrecy. Remove the shame from the conversation. AARP's 2025 research found that 90 percent of Americans now recognize that anyone can become a victim. That recognition is the first and most important line of defense.

If you or someone you love is targeted, report it. File a complaint with the FTC at ReportFraud.ftc.gov. File with the FBI's Internet Crime Complaint Center at ic3.gov. Call the DOJ Elder Justice Hotline at 1 833 FRAUD 11. Call the AARP Fraud Watch Network Helpline at 877 908 3360. Reporting matters because it builds the data that drives enforcement, and it may help prevent the next victim.

What You Do Next Matters

The numbers in this chapter tell one story. From $5.8 billion in reported fraud losses in 2021 to $12.5 billion in 2024, with 2025 on pace to shatter that record. Behind every dollar is a person. A grandmother who heard her grandchild's voice begging for help. A divorced woman who found what she believed was love and lost her retirement savings. A shopper who trusted a Google search result that led to a website built by a machine in under ten minutes.

The $12.5 billion figure is not the ceiling. It is the floor. Experian, Deloitte, the FBI, and the FTC all agree that losses will keep climbing as AI tools become cheaper, faster, and more convincing. Doing nothing is not a neutral choice. It is a decision to leave yourself and the people you love unprotected.

Set your safe word tonight. Lock down your social media this weekend. Freeze your credit tomorrow morning. Sit down with your parents and your kids and have the conversation that could save them from the worst phone call of their lives. These are not complicated steps. They are the kind of thing you can do in 30 minutes. And those 30 minutes could be the difference between catching a scam and losing everything.

Your privacy. Your voice. Your family's financial security. These are worth fighting for.