Volunteers Apply to Be Deepfake Faces for Pig-Butchering Scams
Summary
- People worldwide are voluntarily applying to be AI face models for crypto scam operations
- Scam workers use real-time deepfake video calls to bypass victim identity verification
- Job ads on Telegram require up to 150 deepfake video calls per day on 6-month contracts
- Voluntary recruitment is distinct from the human trafficking that drives broader scam compound operations
Details
Voluntary deepfake face models recruited via Telegram for organized scam operations
People from Turkey, Russia, Ukraine, Belarus, and multiple Asian countries are applying for 'AI face model' jobs posted on Telegram. These roles involve providing their likeness for real-time deepfake video calls used in pig-butchering scam operations — a voluntary participation layer distinct from the trafficked workers who dominate these compounds.
Real-time face-swapping AI deployed to defeat victim identity verification via video call
Scam operators build fake personas using stolen images of attractive people, then switch to live deepfake video calls when victims request video proof. Dedicated 'AI rooms' within scam compounds house workers making these calls, with face-swap software rendering the operator's face as the fake persona in real time — directly countering the common advice to demand a video call to verify identity.
Job postings require up to 150 deepfake video calls per day per worker
Recruitment ads specify high call volumes — one posting lists approximately 100 video calls per day, others up to 150. Workers must also send photos daily and create audio and video messages. Contracts are typically six months in length, indicating structured, sustained operations rather than ad hoc activity.
At least 24 Telegram channels identified posting AI face model job listings
Cybercrime investigator Hieu Minh Ngo of Vietnamese nonprofit ChongLuaDao identified approximately 24 channels carrying these listings. The Humanity Research Consultancy, an anti-human-trafficking organization, has independently tracked voluntary applicants, suggesting an organized and growing recruitment pipeline.
Pig-butchering scam industry holds thousands of trafficking victims captive in Southeast Asian compounds
The broader industry from which this new AI recruitment layer emerges is a multibillion-dollar criminal enterprise. Trafficked workers — lured with fake job offers — are forced to run romance and crypto investment scams from compounds in Cambodia and neighboring countries. The voluntary AI face model recruitment adds a technical deception capability while remaining operationally distinct from the trafficking component.
Deepfake face model roles normalize complicity in fraud as an informal job category
The semi-professional nature of applications — applicants listing 'AI model' experience on resumes, advertising multilingual skills, posting recruitment videos — suggests some participants may not fully grasp they are enabling large-scale fraud. This normalization, combined with accessible AI tools, risks expanding the pool of willing participants globally.
What This Means
Pig-butchering scam operations have added a new layer to their infrastructure: voluntary workers from multiple countries lending their faces to real-time deepfake video calls, allowing fraudsters to defeat the video verification check that victims increasingly rely on. This means the common advice to "ask for a video call" is no longer a reliable safeguard against romance or crypto investment scams. The combination of industrialized fraud compounds, accessible face-swapping AI, and a growing pool of willing participants globally represents a significant escalation in the sophistication and reach of these criminal enterprises, with direct implications for consumer protection and AI misuse policy.
