LOS ANGELES, CALIFORNIA — A Los Angeles woman says she lost her entire life savings after falling victim to a sophisticated AI-powered scam involving deepfake videos of soap opera star Steve Burton from General Hospital. The heartbreaking case highlights how artificial intelligence is increasingly being used to target vulnerable people with convincing fake content.
The Scam: AI Deepfakes Posing as Steve Burton
Abigail Ruvalcaba told local media outlets that scammers first reached out to her through Facebook Messenger, posing as actor Steve Burton, who has played Jason Morgan on General Hospital since 1991. The scammers eventually moved the conversation to WhatsApp, where they used AI-generated videos and voice deepfakes to make their deception even more convincing.
“I thought I was in love,” Ruvalcaba told KTLA News. “I thought we were going to have a good life together.”
The scammers reportedly told her that Burton’s home had been destroyed in a fire and that he needed financial help. One of the fake messages read: “The beach house is something we will love, baby.”
How Much Money Was Lost
At first, Ruvalcaba sent more than $81,000 of her own money. But as the fake relationship continued, she went further — selling her family’s condo and handing over the $350,000 proceeds to the scammers. In total, the losses amounted to more than $430,000.
Her daughter, Vivian, explained to reporters that her mother had been diagnosed with bipolar disorder, which made her especially vulnerable to the scammers' false claims. "Now she's in complete debt, and now she's going to have to file for bankruptcy," Vivian told local news outlets.
The Aftermath: Lawsuit and Possible Bankruptcy
The scam not only drained Ruvalcaba’s bank accounts but also put her housing situation in jeopardy. KTLA reported that the family is now suing in an effort to reverse the sale of the condo, which has since been flipped to another owner. Without relief, Ruvalcaba could be forced to leave her current home and file for bankruptcy.
Her daughter expressed frustration over how devastating the scam has been: “It’s destroyed our family financially, and we’re just trying to pick up the pieces.”
Steve Burton Responds
The real Steve Burton was shown one of the deepfake videos that Ruvalcaba received. After watching, he admitted: “Sounds like my voice for sure, 100%.”
Burton had previously taken to social media to warn his fans about impersonation scams, telling followers not to send money to anyone claiming to be him. Unfortunately, in Ruvalcaba’s case, the scammers’ use of AI-generated content was enough to convince her that she was speaking directly with the actor.
Experts Warn: AI Is Supercharging Scams
Cybersecurity experts say this case is just one example of how AI technology is making scams more convincing and dangerous. Deepfakes can mimic voices, facial expressions, and even personal mannerisms, making it harder for victims to distinguish between real and fake interactions.
Authorities warn that celebrities are increasingly being impersonated by scammers — particularly on platforms like Facebook, Instagram, and WhatsApp — to trick fans into sending money.
“Scammers are exploiting the trust that people place in celebrities,” cybersecurity analysts note. “With AI tools, they can create extremely realistic videos in minutes.”
How to Protect Yourself from AI Scams
Law enforcement officials and consumer protection agencies recommend the following steps to avoid falling victim to similar schemes:
- Verify directly: If you get a suspicious message claiming to be from a celebrity or loved one, confirm through official, verified social media accounts.
- Be skeptical of money requests: Legitimate celebrities will not ask fans for money, gifts, or personal financial support.
- Check for inconsistencies: Deepfakes often have slight lip-sync errors, strange backgrounds, or robotic pauses in speech.
- Report suspicious accounts: Social media platforms encourage users to flag and report impersonators.
- Consult with family members: Before making financial transactions, talk with relatives or trusted advisors.
Why This Case Matters
This case out of Los Angeles underscores how AI-driven fraud is growing rapidly and how it can devastate individuals and families. It also raises questions about what responsibility tech companies and regulators should have in preventing AI-generated scams.
For Ruvalcaba and her family, the financial losses may take years to recover, if they can be recovered at all. Meanwhile, her story serves as a cautionary tale for others who may encounter similar impersonation attempts online.
Final Thoughts
Labor Day weekend often marks a time of celebration and rest for Americans, but stories like this one highlight the darker side of technology in the wrong hands. As AI becomes more sophisticated, so do the scams, and staying vigilant is more important than ever.
What do you think should be done to protect people from AI scams like this? Share your thoughts in the comments at ibwhsmag.com and join the conversation.