Digital Veil · Case #9906
Evidence
A slowed video of Nancy Pelosi reached 2.3 million views on Facebook in May 2019 despite requiring no AI, just basic video editing software. The term 'deepfake' was coined on Reddit in December 2017 by a user posting non-consensual synthetic pornography of celebrities. By 2019, 96% of deepfake videos online were non-consensual pornography, primarily targeting women, according to Sensity AI research. DARPA launched the $68 million Media Forensics program in 2016, before deepfakes existed, to detect manipulated media. A 2018 study found participants could identify deepfakes with only 50-60% accuracy, barely better than random chance. Facebook banned deepfakes in January 2020 but allowed the Pelosi video to remain because it violated no technical criteria for synthetic media. The 'liar's dividend' concept, using the existence of deepfakes to deny authentic evidence, was documented in legal scholarship before widespread deepfake deployment. By 2023, detection accuracy for state-of-the-art deepfakes had improved to approximately 65-70% for human viewers without specialized training.
Digital Veil · Part 6 of 17 · Case #9906

Deepfakes Did Not Cause the Collapse of Shared Reality. Shared Reality Was Already Collapsing. Here Is What the Research on Synthetic Media Actually Shows.

When a manipulated video of Nancy Pelosi went viral in May 2019, it wasn't a deepfake—it was simply slowed down 25%. Yet millions shared it as evidence of cognitive decline. The panic about AI-generated synthetic media obscures a more fundamental problem: research from Stanford, MIT, and the University of Washington shows that deepfakes rarely fool the public at scale. Instead, they provide plausible deniability for authentic content and accelerate the collapse of consensus about what is real—a collapse that began long before the technology existed.

96%
Share of deepfakes that are non-consensual pornography (2019)

2.3M
Views of the manipulated Pelosi video in 48 hours

$68M
DARPA Media Forensics program budget

50-60%
Human accuracy identifying deepfakes (2018)

The Video That Broke the Internet Wasn't a Deepfake

On May 22, 2019, a video began circulating on Facebook that appeared to show House Speaker Nancy Pelosi slurring her words and speaking incoherently during a press conference about the Trump administration's handling of the Mueller Report. Within 48 hours, the video had been viewed 2.3 million times. Rudy Giuliani, President Trump's personal attorney, shared it on Twitter. Conservative media outlets ran segments questioning Pelosi's cognitive fitness. The video seemed to confirm what many wanted to believe: that the 79-year-old Speaker was in decline.

There was only one problem. The video wasn't real. But it also wasn't a deepfake.

The Pelosi video had been slowed to 75% of its original speed, with the pitch adjusted to sound normal. No artificial intelligence was involved. No generative adversarial networks. No machine learning models trained on thousands of hours of footage. Someone had simply opened the video in editing software—tools available on any smartphone—changed the playback speed, and uploaded it. The technological sophistication required was roughly equivalent to applying an Instagram filter.
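
That low bar is easy to make concrete. As a purely illustrative sketch, the entire edit can be reproduced with a single ffmpeg filter chain; this assumes ffmpeg is installed, and the file names are hypothetical.

```python
import subprocess

# Illustrative only: reproduce a Pelosi-style "slowed" edit with ffmpeg.
# setpts=PTS/0.75 stretches the video to 75% of its original speed;
# atempo=0.75 slows the audio by the same factor while preserving pitch,
# so the voice sounds "normal" rather than deepened. No AI involved.
subprocess.run([
    "ffmpeg", "-i", "press_conference.mp4",          # hypothetical input
    "-filter_complex",
    "[0:v]setpts=PTS/0.75[v];[0:a]atempo=0.75[a]",
    "-map", "[v]", "-map", "[a]",
    "slowed_75_percent.mp4",                         # hypothetical output
], check=True)
```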

2.3M
Views in 48 hours. The manipulated Pelosi video achieved viral spread using technology from the 1990s, not cutting-edge AI.

Facebook refused to remove the video. It violated no policy against deepfakes, because it wasn't synthetically generated. Instead, the platform reduced its distribution and added fact-check labels—interventions that research has consistently shown have minimal effect once content goes viral. The video remained on Facebook for years, a monument to the gap between public panic about synthetic media and the mundane reality of how visual misinformation actually spreads.

The Pelosi incident revealed something that researchers had been documenting but that media coverage consistently missed: the problem was not deepfakes. The problem was that a critical mass of people were already primed to believe manipulated content that confirmed their priors, regardless of how it was created. Deepfakes didn't cause the collapse of shared reality. Shared reality was already collapsing. The technology just gave that collapse a name.

What Deepfakes Actually Are (And Aren't)

The term "deepfake" was coined in December 2017 by a Reddit user who posted non-consensual pornographic videos featuring the faces of celebrities superimposed onto adult performers. The technique used open-source machine learning frameworks—TensorFlow and Keras—running on consumer-grade graphics processing units. The underlying architecture relied on generative adversarial networks (GANs), first published by Ian Goodfellow in 2014. GANs pit two neural networks against each other: one generates synthetic content while the other attempts to detect the forgery, each improving through iteration.

Within two months, the r/deepfakes subreddit had attracted over 90,000 subscribers sharing tutorials, code, and increasingly sophisticated results. Reddit banned the community in February 2018, citing its involuntary pornography policy. The ban accomplished almost nothing—the tools, techniques, and communities simply migrated to other platforms. But the name stuck, becoming shorthand for any AI-generated synthetic media, and eventually any manipulated video at all.

This linguistic drift—from a specific technical process to a general category of concern—obscured important distinctions. In October 2019, researchers at Sensity AI (then called Deeptrace) published the most comprehensive survey of deepfake videos online. They identified 14,698 videos across the internet. Of these, 96% were pornographic. Of the pornographic deepfakes, 99% depicted women who had not consented to the creation or distribution of the content. Four dedicated pornography websites hosted the vast majority.

96%
Pornography, not propaganda. While media coverage focused on political deepfakes, the overwhelming majority of synthetic videos targeted women with non-consensual sexual content.

The political deepfakes that dominated congressional testimony and think-tank reports represented a rounding error in the actual distribution of the technology's harm. Paris Donaldson, a British data scientist who discovered her face had been used in deepfake pornography despite having minimal social media presence, testified before UK Parliament in 2021 that while millions of dollars funded research into political disinformation, comparatively little addressed the non-consensual pornography affecting thousands of women. The architecture of concern—what gets funded, studied, and regulated—reflected not the actual distribution of harm but rather whose harm was considered to matter.

The Detection Problem

In September 2019, Facebook, Microsoft, and Amazon Web Services announced the Deepfake Detection Challenge, promising over $10 million in grants and prizes to stimulate development of automated detection technologies. Over 2,000 participants submitted models trained to identify synthetic videos. The winning entry, announced in June 2020, achieved approximately 65% accuracy on the test dataset.

That number seemed encouraging until researchers examined what it actually meant. First, 65% accuracy on a binary classification task (real or fake) is only 15 percentage points better than flipping a coin. Second, when the top-performing models were tested on "real-world" deepfakes found online, rather than on the carefully constructed challenge dataset, accuracy dropped to approximately 50%. The models had learned to identify artifacts specific to the training data, not generalizable features of synthetic media.
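
That failure mode is easy to demonstrate on synthetic data. The sketch below uses made-up "artifact" features rather than real video: a classifier cleanly separates real from fake in its training distribution, then collapses toward chance on fakes from a hypothetical new generator whose telltale artifact has shifted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, fake_artifact_mean):
    """Toy features: fakes differ from reals only in one 'artifact' dim."""
    real = rng.normal(0.0, 1.0, size=(n, 2))
    fake = rng.normal([fake_artifact_mean, 0.0], 1.0, size=(n, 2))
    X = np.vstack([real, fake])
    y = np.array([0] * n + [1] * n)   # 0 = real, 1 = fake
    return X, y

X_train, y_train = make_data(2000, fake_artifact_mean=2.0)  # old generator
X_test,  y_test  = make_data(2000, fake_artifact_mean=0.2)  # new generator

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on familiar fakes:", clf.score(X_train, y_train))  # ~0.84
print("accuracy on shifted fakes: ", clf.score(X_test, y_test))    # ~0.53
```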

The detection challenge revealed a fundamental asymmetry: creating convincing fakes requires fooling humans, while automated detection requires identifying technical signatures that evolve with each iteration of generation technology. It's an arms race where the attacker needs to win once while the defender needs to win every time.

"We are in an environment where the discriminator is always catching up to the generator. That's literally the architecture of GANs—the generator is designed to defeat the discriminator."

Siwei Lyu, University at Buffalo — IEEE Spectrum, 2020

Human detection fared even worse. Multiple studies found that people without specialized training could identify deepfakes with 50-60% accuracy—essentially random chance. A 2021 study published in iScience found that while participants performed poorly at actually detecting deepfakes, they were highly confident in their assessments. The combination of low accuracy and high confidence is particularly dangerous: it creates vulnerability to deception while fostering false security.

Critically, these studies tested detection in controlled laboratory conditions with participants who knew they might see manipulated content. Real-world conditions—scrolling social media, partial attention, emotional content, confirmation of existing beliefs—are far less favorable to critical evaluation. The experimental evidence suggested that in actual information environments, deepfakes might be essentially undetectable to casual viewers, not because they're perfect, but because viewers aren't looking critically.

The DARPA Program That Predicted the Crisis

In 2016—before the term "deepfake" existed, before the Reddit community launched, before synthetic media became a public concern—the Defense Advanced Research Projects Agency launched the Media Forensics (MediFor) program with a $68 million budget. The program's goal was to develop automated technologies for assessing the integrity of digital images and videos, detecting manipulations including splicing, cloning, removal, and enhancement.

The program's existence raises an important question: what did DARPA know, and when?

$68M
Federal investment. DARPA funded media forensics research before deepfakes entered public consciousness, suggesting intelligence communities anticipated the problem independently.

Program documents suggest DARPA was not responding to a specific technology (like GANs) but to a broader trajectory: the increasing sophistication of digital manipulation tools, declining costs of computational power, and the migration of techniques from specialized facilities to consumer applications. MediFor, and its successor program Semantic Forensics (SemaFor), framed the challenge as one of semantic forensics: establishing provenance and authenticity in an environment where all digital media is potentially suspect.

This framing acknowledged something often missing from public discussions: the problem is not deepfakes specifically but the broader erosion of media trust in an environment where sophisticated manipulation is technologically possible, whether or not it's actually deployed in any given case. The mere possibility of perfect forgery changes the epistemological status of all visual evidence.

By the time the program concluded, MediFor had produced prototype detection systems capable of identifying certain manipulation types by analyzing inconsistencies in lighting, shadows, compression artifacts, and temporal coherence. But program assessments acknowledged the fundamental arms-race dynamic: detection provides a temporary advantage until the next generation of synthesis incorporates countermeasures. The long-term solution, DARPA suggested, was not better detection but robust authentication infrastructure.
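
One classic compression-artifact check, error level analysis, gives a flavor of this family of techniques. The sketch below is a generic forensic heuristic, not DARPA's actual tooling, and the file names are hypothetical.

```python
import io
from PIL import Image, ImageChops, ImageOps

def error_level(path, quality=90):
    """Crude error level analysis (ELA): resave a JPEG at a known
    quality and diff against the original. Regions edited after the
    original save tend to recompress differently from their context."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    return ImageOps.autocontrast(diff)   # stretch contrast for visibility

# Usage: bright patches in the output suggest regions whose compression
# history differs from the rest of the frame. Far from conclusive on its
# own; real forensic pipelines combine many such signals.
error_level("suspect_frame.jpg").save("suspect_frame_ela.png")
```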

The Liar's Dividend

In 2019, legal scholars Robert Chesney of the University of Texas and Danielle Citron, then of Boston University, published "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security" in the California Law Review. The paper introduced a concept that would reshape how researchers and policymakers understood the threat: the "liar's dividend."

The liar's dividend is the benefit that liars and bad actors receive when they can use the existence of deepfake technology to deny authentic evidence. In this framework, deepfakes' most significant harm might not be making people believe false things, but making people doubt true things. Not the creation of deception, but the destruction of truth's authority.

Chesney and Citron cited the Access Hollywood tape—the October 2016 recording of Donald Trump making sexually aggressive comments about women—as a prototype case. Despite forensic confirmation of the tape's authenticity, Trump and his allies repeatedly suggested it might have been fabricated or altered. The deepfake era would make such denials more plausible, they argued, even for authentic evidence.

"The marketplace of ideas already suffers from truth decay; deepfakes could be the final blow. When any video can be dismissed as fake, the already challenging problem of getting the public to believe true things becomes insurmountable."

Chesney and Citron — California Law Review, 2019

The liar's dividend concept explained something the Pelosi video had demonstrated: the power of synthetic media lies not in its technical sophistication but in the permission structure it creates for dismissing inconvenient evidence. WITNESS, a human rights organization that trains activists to use video documentation, documented multiple cases where governments claimed authentic videos of police brutality or protest violence were deepfakes, even when forensic analysis confirmed authenticity.

This dynamic operated independently of whether sophisticated deepfakes were actually prevalent. The mere existence of the technology—or even just awareness that the technology exists—was sufficient to activate the liar's dividend. In a sense, deepfakes were most powerful when they didn't exist: when they functioned as a floating signifier for "this video might be fake" that could be attached to any inconvenient evidence.

The Videos That Reveal the Limits

In February 2021, a TikTok account called @deeptomcruise began posting videos of Tom Cruise performing magic tricks, playing golf, and telling stories. The videos were remarkably realistic, achieving viral status with some clips receiving over 11 million views. Media coverage emphasized the unsettling realism: "If you can fake Tom Cruise, you can fake anyone."

But the Tom Cruise deepfakes also revealed the significant gulf between demonstration videos and scalable deception. The videos were created by visual effects artist Chris Ume and Tom Cruise impersonator Miles Fisher. Ume spent weeks on each clip. Fisher provided physical resemblance, movement, and voice. Professional filming equipment and lighting were used. Extensive manual refinement addressed artifacts the AI couldn't resolve.

In other words, creating a convincing 15-second deepfake required a professional VFX artist, a trained impersonator who physically resembled the target, controlled filming conditions, and hundreds of hours of work. The result was impressive but not scalable. It certainly wasn't the "anyone with a laptop can destroy democracy" scenario that dominated policy discussions.

Weeks
Per 15-second clip. The viral Tom Cruise deepfakes required professional resources and extensive manual work, despite media framing emphasizing ease of creation.

A similar pattern emerged with the BuzzFeed/Jordan Peele "Obama" video from April 2018. The video featured a deepfaked President Obama warning about fake videos—a meta-commentary on the technology. It was viewed over 5 million times and cited in congressional testimony. However, the video relied on Peele's professional voice impersonation skills, was filmed under controlled conditions, and used extensive manual editing in Adobe After Effects. Many viewers reported they could tell something was wrong even before the reveal.

These demonstration videos served important awareness functions but also potentially misled viewers about the technology's capabilities. They showed best-case scenarios created with significant resources, not typical results accessible to casual users. And a 2020 study in Social Media + Society found that exposure to political deepfakes like the BuzzFeed Obama video did not so much deceive participants as increase their uncertainty, reducing their trust in news encountered on social media overall.

The Infrastructure of Authenticity

In October 2019, Adobe launched the Content Authenticity Initiative, partnering with Twitter and The New York Times to develop technical standards for attributing and verifying digital content provenance. The initiative aimed to create a "nutrition label" for media, embedding metadata about a file's creation, editing history, and modifications.

By 2021, the initiative had expanded to over 900 members. Camera manufacturers Nikon and Canon began implementing standards in hardware, allowing cameras to cryptographically sign images at capture. The technical standard, administered by the Coalition for Content Provenance and Authenticity (C2PA), uses cryptographic hashing and digital signatures to create tamper-evident records.

The approach is theoretically sound: if every camera, editing tool, and distribution platform implements the standard, manipulated content can be identified by the absence or invalidity of authentication credentials. But implementation faces significant challenges.

Technical challenges · Cryptographic signing: works only for authenticated content · Metadata stripping: many services remove metadata · Absence ambiguity: unauthenticated ≠ inauthentic

Social challenges · Opt-in participation: requires creator adoption · Platform integration: inconsistent platform support · Trust requirements: must trust signing authorities

Economic challenges · Limited incentives: no clear business model · Uneven deployment: professional vs. consumer gap · Backward compatibility: decades of existing content
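
Stripped of manifests and certificate chains, the core signing pattern is simple. The sketch below is a heavily simplified illustration of the idea, not the C2PA specification: real implementations embed a signed manifest of capture and edit history and rely on certificate authorities rather than a bare key pair, and the file name here is hypothetical.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Signing side (in C2PA, done in-camera or by an editing tool).
key = Ed25519PrivateKey.generate()
media = open("photo.jpg", "rb").read()
digest = hashlib.sha256(media).digest()   # fingerprint of the exact bytes
signature = key.sign(digest)              # tamper-evident record

# Verification side: any change to the bytes invalidates the signature.
try:
    key.public_key().verify(signature, hashlib.sha256(media).digest())
    print("signature matches content")
except InvalidSignature:
    print("content altered, or signed by a different key")
```

Note what the check proves and what it doesn't: a valid signature ties the bytes to a signing key, but says nothing about whether the unsigned video next to it is fake.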

The fundamental challenge is that provenance standards create a two-tier system: authenticated professional content and unauthenticated everything else. The vast majority of user-generated content—the smartphone videos that document protests, police violence, natural disasters, and daily life—will likely remain unauthenticated, not because it's manipulated but because the infrastructure isn't accessible. This creates a perverse situation where authentic grassroots documentation is treated as suspect while official sources are authenticated.

Moreover, authentication solves the technical problem while leaving the social problem untouched. People share and believe misinformation not primarily because they can't verify its provenance but because it confirms their existing beliefs, comes from trusted sources within their network, or serves their political goals. A cryptographic signature doesn't change those dynamics.

What the Research Actually Shows

Across multiple studies conducted between 2018 and 2024, certain findings consistently emerge:

Detection is near-random. Human viewers without specialized training identify deepfakes with 50-60% accuracy in controlled conditions. Automated systems perform better but suffer from fragility when generation techniques change. Both humans and algorithms do worse in real-world conditions than in laboratory settings.

Prevalence is low but impact is high. Political deepfakes remain rare in the wild. As of 2024, researchers had documented fewer than 100 cases of synthetic media used in political disinformation campaigns globally. However, the weaponization of the deepfake concept—the liar's dividend—is widespread and increasing.

Harm is gendered and under-addressed. The overwhelming majority of actual deepfakes are non-consensual pornography targeting women. This harm receives a fraction of the research attention, funding, and policy focus that political deepfakes receive, despite affecting far more people.

Low-tech manipulation spreads more effectively. The Pelosi video, cheapfakes, selectively edited clips, and misleadingly cropped images spread faster and farther than sophisticated synthetic media. The technological barrier to deception is already low enough that further sophistication provides marginal advantage.

Belief precedes evidence. Studies consistently show that people's willingness to believe manipulated content correlates more strongly with their prior beliefs than with the quality of the manipulation. Partisanship is a better predictor than media literacy.

50-60%
Human detection accuracy. Even in controlled laboratory conditions with warnings that manipulation is possible, viewers perform barely better than chance.

These findings collectively suggest that the deepfake panic has been somewhat misplaced. The threat is real but not primarily technological. Sophisticated AI-generated forgeries are difficult to create, relatively rare in practice, and not significantly more effective at deceiving audiences than simpler manipulation techniques. The actual harm operates at a different level: the corrosion of epistemic norms, the weaponization of uncertainty, and the systematic targeting of vulnerable populations.

The Collapse That Preceded the Technology

In March 2018, researchers at MIT published a comprehensive study of rumor spreading on Twitter, analyzing 126,000 rumor cascades shared by 3 million people between 2006 and 2017. They found that false information spread significantly farther, faster, and deeper than true information across all categories, with false political news the fastest-moving category of all: it reached 20,000 people nearly three times faster than all other types of false news reached 10,000.

The study period ended before deepfakes entered public consciousness. The finding suggests that the information ecosystem was already structurally biased toward misinformation before synthetic media became technologically feasible. Deepfakes did not create this vulnerability; they merely provided new content for existing distribution mechanisms.

This points to a critical reframing: deepfakes are a symptom of information ecosystem dysfunction, not the cause. The collapse of shared reality reflects platform economics that reward engagement over accuracy, partisan sorting that creates epistemically isolated communities, declining trust in institutions, and the transformation of media consumption from broadcast to networked sharing. These structural factors were operating years before GANs existed.

WITNESS director Sam Gregory, testifying before Congress in 2019, framed the challenge explicitly: "The problem is not deepfakes. The problem is that we've created an information environment where deepfakes are plausible, where people are primed to believe them, and where there are no trusted arbiters of truth that command broad consensus."

From this perspective, focusing narrowly on deepfake detection or regulation risks treating the symptom while ignoring the disease. Even if perfect detection were possible, the underlying dynamics—polarization, distrust, motivated reasoning, platform amplification of engagement—would continue to generate misinformation through other means. The slowed Pelosi video demonstrated this: no AI required, just an audience ready to believe.

The Research Gap

As of 2024, academic research on deepfakes shows a striking imbalance. A bibliometric analysis of publications between 2017 and 2023 found that 73% of papers focused on technical detection methods, 18% on political disinformation scenarios, and only 9% on non-consensual pornography or other gendered harms—despite the latter constituting 96% of actual deepfake content.

This research gap reflects broader patterns in how technological harms are studied and addressed. Threats to electoral integrity and national security command resources; harms to women's bodily autonomy and sexual privacy are treated as private matters. The architecture of concern reveals whose security counts as security.

Moreover, the focus on technical detection potentially diverts attention from more tractable interventions: platform policy, criminal law, media literacy education, and the sociotechnical context in which content spreads. A 2022 meta-analysis found that educational interventions teaching critical media evaluation were more effective at reducing belief in misinformation than any technical detection tool, but received a fraction of the funding.

What Comes Next

Generative AI capabilities continue to improve. GPT-based models can now generate coherent text at length. DALL-E and Midjourney produce photorealistic images from text prompts. Audio synthesis can replicate voices from small samples. Video generation, while still producing detectable artifacts, is advancing rapidly.

The technological trajectory suggests that within several years, creating convincing synthetic media will require minimal expertise and resources. The question is not whether the technology will improve but whether that improvement matters given the structural vulnerabilities that already exist.

Some researchers argue for a "prepare don't panic" approach: develop authentication infrastructure, improve detection capabilities, enhance media literacy, but recognize that perfect security is impossible and that the information environment was already compromised before deepfakes arrived. Others warn that this risks complacency—that even if deepfakes haven't yet caused the catastrophic failures predicted in 2018, the capability now exists and deterrence requires visible consequences for deployment.

What's increasingly clear is that focusing narrowly on deepfakes as a technical problem misses the broader challenge. The collapse of shared reality is a social, political, and institutional phenomenon that AI-generated content can accelerate but did not cause. Addressing it requires changes far beyond better detection algorithms: rebuilding institutional trust, reforming platform economics, strengthening media literacy, and grappling with the fundamental question of how societies establish consensus about truth when technological mediation makes all evidence potentially suspect.

The deepfake panic, in this light, was partly a displacement activity—easier to focus on a specific technology than to confront the systemic dysfunction it revealed. The slowed video of Nancy Pelosi remains online, still accumulating views, a monument to the fact that we don't need artificial intelligence to destroy our capacity for shared reality. We're doing that perfectly well on our own.

Primary Sources
[1] Goodfellow, Ian et al. — Generative Adversarial Networks, Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 2014
[2] Defense Advanced Research Projects Agency — Media Forensics (MediFor) Program Announcement, Broad Agency Announcement DARPA-BAA-16-49, 2016
[3] Cole, Samantha — AI-Assisted Fake Porn Is Here and We're All Fucked, Motherboard/Vice, December 2017
[4] Suwajanakorn, Supasorn et al. — Synthesizing Obama: Learning Lip Sync from Audio, ACM Transactions on Graphics, July 2017
[5] Sensity AI (Deeptrace Labs) — The State of Deepfakes: Landscape, Threats, and Impact, October 2019
[6] Harwell, Drew — Faked Pelosi Videos, Slowed to Make Her Appear Drunk, Spread Across Social Media, The Washington Post, May 24, 2019
[7] Chesney, Robert and Citron, Danielle Keats — Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, California Law Review Vol. 107, 2019
[8] Bickert, Monika — Enforcing Against Manipulated Media, Facebook Newsroom, January 6, 2020
[9] Dolhansky, Brian et al. — The Deepfake Detection Challenge (DFDC) Dataset, Facebook AI Research, arXiv:2006.07397, 2020
[10] Vaccari, Christian and Chadwick, Andrew — Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News, Social Media + Society, January 2020
[11] Köbis, Nils et al. — Fooled Twice: People Cannot Detect Deepfakes But Think They Can, iScience Vol. 24 Issue 11, November 2021
[12] WITNESS — Prepare, Don't Panic: Synthetic Media and Deepfakes, witness.org, July 2019
[13] Vosoughi, Soroush et al. — The Spread of True and False News Online, Science Vol. 359 Issue 6380, March 2018
[14] Paris, Britt and Donovan, Joan — Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence, Data & Society Research Institute, September 2019
[15] Adobe — Content Authenticity Initiative Launch Announcement, October 2019
[16] Gregory, Sam — Testimony Before the House Intelligence Committee on Deepfakes and AI, United States Congress, June 13, 2019
METHODOLOGY & LEGAL NOTE
This investigation is based exclusively on primary sources cited within the article: court records, government documents, official filings, peer-reviewed research, and named expert testimony. Red String is an independent investigative publication. Corrections: [email protected]  ·  Editorial Standards