Scam compounds hiring “AI models” to seal the deal in deepfake video calls

Brownie2019
Thread author
Scam compounds in Southeast Asia have already become modern slave farms, trapping victims and forcing many of them to work as scammers. Now they’ve added another type of worker to the mix: so-called AI models.

These professional scammers conduct video calls with their targets, charming them into handing over their cash. As reported in WIRED this week, recruitment ads describe roles that involve handling around a hundred live video calls per day to promote romance scams and crypto hustles across industrial-scale scam operations in Cambodia, Myanmar, and Laos.

These scam farms already rely on chat operators to ensnare victims via messaging apps. Many of these operators are themselves victims of trafficking, forced to work long shifts under threats of violence. They develop relationships with their targets over time, exploiting loneliness or financial worries. While they work to make a victim feel special, they’re actually juggling similar text sessions with dozens of people at once. Eventually, a victim may want a video call, either to meet their imagined sweetheart or to confirm that an investment opportunity is legitimate (or both).

Chat operators may not be able to charm victims on video, especially when they are victims themselves, forced to work long shifts and physically beaten. So when a victim asks for a video call, the scam bosses call in a specialist “AI model” with strong interpersonal skills to win the target over. Despite the name, these are real people hired to appear on video calls; AI deepfake software adjusts their looks to match the fictional person the victim is hoping to see.

Scam operations run recruitment ads for these models, and there appears to be no shortage of willing applicants. Humanity Research Consultancy, an investigative research group that tracks trafficking supply chains, identified a pitch from a 24-year-old Uzbek woman calling herself Angel. She claimed to speak four languages and to have a year’s experience as an AI model, and she demanded $7,000 a month for her services.
Read more here:

This is a very credible evolution of the same scam model, just with better social engineering tools.

What stands out here is not only the use of deepfake-assisted video calls, but the industrialized workflow behind it:

  • text operators build trust over time
  • specialists take over when a victim requests a live call
  • face/voice manipulation is used to match the fake identity
  • the final goal is still the same: romance fraud, investment fraud, or both

Why this matters

For many people, a live video call still feels like “proof” that the person is real. That is no longer a reliable assumption.

Even without fully automated AI avatars, a real human using appearance-altering software can be persuasive enough to remove the victim’s last doubts. In practice, this makes pig butchering and related scams even harder for ordinary users to detect.

Important takeaway

A video call should no longer be treated as identity verification by itself.

If money, crypto, remote access, or sensitive personal information is involved, safer verification matters more than emotional trust. In general, the lowest-risk approach is:

  • never rely on messaging plus video alone
  • never send money or crypto to someone only known online
  • never install apps or “investment platforms” provided by that person
  • verify through independent, external channels
  • treat urgency and secrecy as major warning signs

About the “AI model” label

The term is misleading. Based on the article excerpt, these are not AI-generated people in the full sense, but real operators using AI/deepfake tools to visually match a fictional persona. That distinction matters because it shows how little technology is actually required for a convincing fraud: a trained scammer, a script, and image manipulation may be enough.

Broader security implication

This is also a reminder that anti-scam advice has to move beyond “check if they will video chat.” That test is now outdated.

Stronger guidance is:

  • verify identities with independent evidence
  • be skeptical of crypto investment coaching from online contacts
  • assume a polished social presence can be staged
  • focus on the payment request, not the charm or presentation

Bottom line

If the report is accurate, this is not a new category of scam so much as a more convincing delivery method for existing fraud. The safest conclusion is that live video is no longer meaningful proof of legitimacy in romance or investment conversations.
 
What’s truly chilling here isn’t just the tech; it’s the industrialization of deceit. We’ve moved past lone scammers into a full corporate hierarchy where “AI models” act as high-end aesthetic mercenaries.

It’s a twisted division of labor: modern-day slaves for the initial grooming, and $7,000-a-month “specialists” to close the deal via deepfake. Victims think they’re making a human connection, but they’re caught in an optimized assembly line where charisma is simply another software plug-in.

At this level of professional fraud, the old “ask for a video call to verify” advice isn’t just obsolete; it’s a death trap. 🎭💻💀