Glenn Reads

The Deepfake That Broke Congress

Rep. Al Green's ejection from Trump's State of the Union reveals how AI-generated hate has become the new normal in American politics.

politics · artificial intelligence · deepfakes · congress · disinformation · technology

At 9:47 PM on February 27, 2025, Representative Al Green stood up in the House chamber during President Trump's State of the Union address and held up his phone. The screen showed what appeared to be Barack Obama delivering a racist tirade against Latino immigrants. Green shouted "This is not real!" three times before Capitol Police removed him from the chamber. The video was a deepfake, generated by AI and circulated on social media just hours before the speech.

Rep. Al Green is removed from the House chamber after protesting an AI-generated video during President Trump's State of the Union address

What happened next reveals something darker about American democracy than the breach of congressional protocol. Within minutes, the deepfake video had been shared 50,000 times across platforms. By morning, it had reached 2.3 million views. Meanwhile, Green's protest was dismissed as a partisan stunt, and the deepfake itself became a footnote in coverage of his ejection.

We're living through the first constitutional crisis caused by artificial intelligence. And we're pretending it's still about politics as usual.

The 47-Second Video That Fooled Millions

The Obama deepfake was sophisticated enough to fool casual viewers but crude enough to detect with basic analysis. Created using open-source AI tools, it showed the former president in what appeared to be a private moment, speaking with unprecedented vitriol about immigration policy. The lip-sync was nearly perfect. The lighting matched Obama's typical indoor settings. Only the slight digital artifacts around the mouth area revealed its artificial nature.
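The kind of artifact analysis researchers rely on can be illustrated with a toy sketch. Production-grade detectors use trained neural networks, but the underlying intuition is that synthetic blending around a region like the mouth often leaves statistical traces, for example an unusual share of high-frequency energy. The function below (its name, the region format, and the frequency cutoff are illustrative assumptions, not taken from any real detection tool) measures that share for a cropped region of a grayscale frame:

```python
import numpy as np

def region_high_freq_energy(frame: np.ndarray, box: tuple) -> float:
    """Fraction of a region's spectral energy above a frequency cutoff.

    frame: 2-D grayscale image as a float array.
    box:   (top, bottom, left, right) pixel bounds of the region
           to inspect (e.g. the area around the mouth).
    """
    top, bottom, left, right = box
    region = frame[top:bottom, left:right]
    # 2-D FFT, shifted so low frequencies sit at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(region)))
    h, w = spectrum.shape
    # Radial distance of each frequency bin from the spectrum's center.
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    cutoff = min(h, w) / 4  # arbitrary split between "low" and "high"
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# A smooth synthetic patch vs. the same patch with injected pixel noise,
# standing in for natural skin texture vs. blending artifacts:
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.3 * rng.standard_normal((64, 64))
print(region_high_freq_energy(smooth, (0, 64, 0, 64)))
print(region_high_freq_energy(noisy, (0, 64, 0, 64)))
```

The noisy patch scores a much larger high-frequency fraction than the smooth one, which is the kind of localized statistical anomaly that gave the Obama video away to analysts even as it fooled casual viewers.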

Security researchers later traced the video's origin to a Discord server with 847 members, where it was initially posted at 4:23 PM EST, five hours before Trump's speech began. The user who created it had posted a tutorial three days earlier showing how to generate "political content" using freely available deepfake software.

The video's spread followed a predictable pattern. First, it appeared on fringe platforms like Gab and Truth Social. Within two hours, edited versions surfaced on mainstream platforms with watermarks removed and context stripped away. By 8 PM, political influencers were sharing it as "leaked audio" without mentioning its digital origin.

AI detection tools can identify deepfakes through analysis of digital artifacts, but most viewers never see this analysis

TikTok's algorithm amplified the video to users with a history of engaging with political content. Twitter's trending topics featured discussions of Obama's "shocking admission." Facebook's fact-checkers didn't flag the video until after Green's protest, by which point it had already gone viral.

Why Green Couldn't Stay Silent

Green had been tracking the video's spread since 6 PM through his communications team. A former civil rights lawyer, he understood the historical weight of AI-generated racist content featuring America's first Black president being weaponized during a State of the Union address.

"This wasn't just about fact-checking," Green told reporters the following day. "When you can make any Black leader appear to say anything racist, you've broken the basic premise of democratic discourse."

Green's decision to interrupt the president violated House Rule XVII, which prohibits demonstrations during joint sessions of Congress. But his team had calculated that the alternative, allowing the deepfake to circulate unchallenged during Trump's speech, posed a greater threat to democratic norms.


The congressman had attempted to address the deepfake through traditional channels first. His office contacted the House Parliamentarian at 8:15 PM requesting permission to make a point of order about "false information circulating during the joint session." The request was denied under existing protocols that don't account for real-time disinformation campaigns.

Green's protest was unprecedented but not clearly illegal. House rules allow members to be recognized for parliamentary inquiries, though interrupting a presidential address falls into a constitutional gray area. His ejection was swift, but it raised questions about whether Congress has adequate procedures for addressing AI-generated content that threatens democratic discourse in real time.

The Normalization of Digital Deception

The Obama deepfake wasn't an isolated incident. Since January 2024, researchers have documented 847 politically motivated deepfakes targeting current and former elected officials. The technology has been democratized to the point that creating a convincing fake video requires nothing more than a smartphone and free software.

Santa Clara University's Ethics Center has tracked how AI-generated political content follows escalating patterns of harm. What begins as obvious parody evolves into sophisticated manipulation designed to influence voter behavior. The Obama video represented a new threshold: AI-generated hate speech featuring real political figures, distributed at scale during live democratic proceedings.

The rapid improvement in AI-generated content has made deepfakes increasingly difficult to detect without specialized tools

Current federal legislation trails far behind the technology. The DEEPFAKES Accountability Act, introduced in 2023, requires disclosure labels but carries no enforcement mechanisms for content that spreads without attribution. State laws vary wildly: California prohibits deepfakes within 60 days of an election, while Texas only regulates sexually explicit synthetic media.

The European Union's AI Act, implemented in 2024, takes a more comprehensive approach by categorizing deepfakes as "high risk" systems requiring transparency disclosures. But American platforms operate under a patchwork of state laws that create enforcement gaps exactly like the one exploited during Green's protest.

Platform responses have been reactive rather than preventive. Twitter's synthetic media policy requires human review for political content, but the review process takes 4-6 hours on average. Facebook's deepfake detection relies on third-party fact-checkers who typically respond to viral content rather than preventing its spread. TikTok has invested in automated detection but acknowledges that sophisticated deepfakes can circulate for hours before being caught.

Constitutional Questions in the Age of AI

Green's ejection raises fundamental questions about how democratic institutions adapt to AI-generated threats. The House has operated under essentially the same procedural rules since 1789, with modifications that account for television coverage and security concerns but nothing addressing real-time digital manipulation.

Constitutional scholars are divided on whether Green's protest constituted protected speech or disorderly conduct. Georgetown Law Professor Marty Lederman argues that the First Amendment creates a "compelling interest exception" for speech that directly counters false information during government proceedings. Others contend that House rules necessarily restrict speech to maintain decorum during formal sessions.

The deeper constitutional issue involves Congress's capacity to function when any statement by any member can be instantly manipulated and circulated as authentic content. If senators or representatives can be deepfaked making racist, violent, or treasonous statements in real-time, traditional parliamentary procedures become inadequate for maintaining democratic discourse.

The House chamber after Rep. Green's removal, highlighting the tension between maintaining order and addressing digital threats to democracy

House Speaker Mike Johnson announced on March 3 that a bipartisan committee would review parliamentary procedures for addressing "digital disruptions" during congressional sessions. The committee has 90 days to recommend rule changes that balance free speech protections with institutional integrity.

Legal experts expect the Supreme Court will eventually need to rule on whether AI-generated content receives the same First Amendment protections as human speech, particularly when it involves public figures in official capacities. The Obama deepfake case could become the test vehicle for establishing precedent on synthetic media in political contexts.

The Real Crisis We're Ignoring

Green's protest succeeded in drawing attention to the deepfake, but it also revealed how unprepared American institutions are for the AI disinformation era. The video continued circulating even after being debunked, with 340,000 new shares recorded in the week following the State of the Union.

More troubling is how quickly the focus shifted from the deepfake's existence to Green's breach of protocol. Cable news spent more airtime analyzing parliamentary procedure than examining how AI-generated racist content targeting a former president could spread unchecked during a constitutional ceremony.

This misdirection reflects a broader failure to grasp how artificial intelligence is reshaping political communication. We're still treating deepfakes as a content moderation problem rather than a fundamental threat to democratic epistemology—our shared ability to distinguish truth from falsehood in public discourse.

The technology has advanced faster than our democratic institutions can adapt, creating a gap that bad actors are eager to exploit.

The Obama deepfake demonstrated that any public figure can be made to appear saying anything at any time, with distribution at scale happening faster than human fact-checkers can respond. When such content surfaces during live democratic proceedings, traditional safeguards—parliamentary rules, media gatekeepers, institutional norms—prove inadequate.

Green's ejection was constitutionally proper under existing House rules. But those rules were written for an era when the biggest threat to congressional decorum was heckling, not AI-generated hate speech that can reach millions of viewers before being debunked.

The real story isn't that a congressman interrupted a presidential speech. It's that we've created a technological environment where interrupting might be the only way to counter digital lies spreading in real time during the most important political events in American democracy.

Until Congress, the courts, and the platforms develop adequate responses to AI-generated political manipulation, we can expect more moments like Green's protest. The alternative is allowing synthetic media to shape democratic discourse without any institutional pushback whatsoever.

That's not a constitutional crisis. That's the end of constitutional democracy as we know it.
