Political Deepfakes Surge, Influencing Views Even When Recognized as Fake
Summary
- Political deepfakes surpassed 1,000 posts in early 2025 alone, nearly matching the total from the prior eight years combined
- AI tools now make fabricating realistic political scenes trivially easy at scale
- Fake personas are monetized via OnlyFans while simultaneously serving as political propaganda
- Deepfakes persuade audiences through emotional resonance even when recognized as artificial
Details
1,000+ deepfakes in early 2025 vs. 1,344 across prior 8 years
Purdue's Grail lab has catalogued over 1,000 English-language political deepfake posts since the start of 2025 alone, nearly equaling the 1,344 incidents documented across the previous eight years combined.
AI makes fabrication trivially easy at unprecedented scale
Improved generative AI tools let creators place real political figures into fabricated scenes quickly and cheaply, enabling both propagandists and commercial content farmers to operate at scale previously requiring professional resources.
Fake military persona 'Jessica Foster' monetized at scale with 1M+ followers
An AI-generated woman in US military uniform amassed over 1 million Instagram followers before linking to a monetized OnlyFans account, illustrating how deepfakes can serve simultaneous commercial and ideological purposes.
White House shared 18+ deepfakes; issue spans partisan lines
Trump and the White House have shared at least 18 deepfakes since 2024, according to the Grail database; California Governor Newsom has also shared deepfakes targeting Trump, demonstrating that the practice is not limited to one party.
Persuasion persists even when viewers recognize content as fake
Researchers note that deepfakes 'feel true' even to audiences who know they are fabricated, with Schiff describing the phenomenon as 'blending the lines between political cartoons and reality' — a dynamic that undermines purely detection-based mitigation strategies.
Key dimensions of the political deepfake surge — volume statistics, enabling technology, monetization, political use, and persuasion dynamics.
What This Means
For AI practitioners and platform trust-and-safety teams, this research underscores that detection-based mitigation strategies are insufficient — persuasion effects persist even among audiences who correctly identify content as synthetic. The monetization angle, exemplified by the Jessica Foster account, means financial incentives now independently drive deepfake production outside state-level influence operations, complicating the threat model. Policy frameworks and content moderation approaches may need to shift focus from factual deception to emotional resonance as the primary harm vector.
