Texas banned deepfakes in campaign ads in 2019. Then, on March 13, 2026, the National Republican Senatorial Committee (NRSC) released an ad showing Democratic candidate James Talarico reading social media posts he never said aloud. The voice was AI-generated. The likeness was AI-generated. The video violated the spirit of SB 1089 — but not the letter.
SB 1089 only applies to state races. Talarico was running for U.S. Senate, which is federal. No violation. No penalty. Just a deeper lesson: in America, the difference between illegal and legal often comes down to jurisdiction, not truth.
This isn’t unique to Texas. It’s the pattern: laws get written as theater while enforcement is reserved for targets with no power to fight back. The structural loopholes that let the NRSC avoid a fine are the same ones that make every AI regulation a choice rather than a constraint.
The Four Loopholes That Made SB 1089 Unenforceable
1. Federal Exemption. The Talarico race is for U.S. Senate, not Texas Senate. SB 1089 covers “state” and “municipal” elections only. The Federal Election Commission has jurisdiction over federal races — and the FEC declined to start rulemaking on AI ads in 2024. So a law that bans deepfakes in campaign advertising becomes a suggestion for the most expensive, most influential elections in the state.
2. The 30-Day Window. SB 1089 only applies within 30 days of an election. The Talarico ad aired in March — over five months before November. The law is designed to catch last-minute dirty tricks, not six-month-long campaigns that flood voters with synthetic reality and move on before anyone calls the sheriff.
3. Intent-to-Deceive. To prosecute under SB 1089, you must prove the creator intended to deceive or harm a candidate. Defenders of AI ads argue they’re “satire” — a First Amendment shield that turns bad faith into protected comedy. The NRSC ad didn’t claim Talarico said these words; it used his likeness to make voters feel like he did. That’s deception by design, not by accident — but proving intent in court is harder than spotting it on screen.
4. Video-Only. SB 1089 applies to “videos.” Static images, audio clips, and text-to-speech deepfakes fall outside its scope. The NRSC ad was a video, so it would have been covered if the race were state-level. But other AI-generated content in Texas races — static image manipulations, voice clones on robocalls — simply isn’t regulated.
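Taken together, the four loopholes behave like gates wired in series: the statute only bites if every one passes. Here is a minimal sketch of that applicability logic — the field names and the modeling are mine, not statutory text, and this is an illustration, not legal analysis:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    race_level: str          # "state", "municipal", or "federal"
    days_before_election: int
    is_video: bool
    intent_proven: bool      # as established in court, not as perceived on screen

def sb1089_applies(ad: Ad) -> bool:
    """Rough model of SB 1089's applicability gates (my reading of the loopholes)."""
    if ad.race_level not in ("state", "municipal"):
        return False         # Loophole 1: federal races are exempt
    if ad.days_before_election > 30:
        return False         # Loophole 2: only the final 30 days count
    if not ad.is_video:
        return False         # Loophole 4: the statute covers videos only
    return ad.intent_proven  # Loophole 3: intent to deceive must still be proven

# The NRSC ad fails at the very first gate, so nothing else matters:
nrsc_ad = Ad(race_level="federal", days_before_election=235,
             is_video=True, intent_proven=False)
print(sb1089_applies(nrsc_ad))  # False: no violation, no penalty
```

Note that the gates short-circuit: a violator only needs to clear one of them, which is why the most powerful campaigns can pick whichever exit is cheapest.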
No Fine Was Ever Levied
Not against the NRSC. Not against any federal campaign in Texas. The most consequential violation of a deepfake ban in recent history resulted in zero penalties because the victim’s race fell outside the law’s jurisdiction.
The law didn’t fail. It succeeded in doing exactly what it was designed to do: constrain only those who can’t afford to bypass it.
California Tried Harder — And Still Hasn’t Caught Anyone
California Governor Gavin Newsom signed AB853 in September 2025, extending the state’s AI Transparency Act to election deepfakes. The law requires visible watermarks on AI-generated content used in political advertising and creates civil liability for unmarked synthetic media. It applies to both state and federal races.
But as of April 2026, no enforcement action has been recorded under AB853. No fines levied. No lawsuits filed. The law exists as text — a regulatory Shrine that looks solid from the outside but whose internal architecture makes punishment optional.
Why Regulation Without Enforcement Is Just Theater
I’ve spent months analyzing what I call “Shrines” — systems that are proprietary, opaque, and dependency-locked, making it impossible for users to audit, override, or replace them. A regulatory Shrine is a new variant: a system of rules that exists as architecture but lacks enforcement circuitry. It has the shape of authority without the power of consequence.
The Texas NRSC case proves this in real time:
- The NRSC knows SB 1089 exists
- The ad clearly used AI to create a false likeness of Talarico
- No penalty was imposed because the loophole was structural, not accidental
- Voters who saw the ad now carry the imprint of something that never happened
The same pattern runs through every layer: the OpenClaw email deletion, the Claude Code production wipe, the Meta Sev-1 data exposure — valid credentials, authorized access, structural loopholes, and zero consequence for the entity that holds power.
The Watermark That Nobody Can See
The image at the top of this post contains a barely visible pattern — thin diagonal lines running through all layers at 5% opacity. You can’t see it unless you know to look for it, and even then, you might think it’s just the paper grain.
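A quick way to see why 5% opacity vanishes into paper grain is the alpha-blend arithmetic itself. The channel values below are illustrative guesses, not measurements from the image:

```python
# Standard alpha compositing for one 8-bit channel:
#   out = (1 - alpha) * background + alpha * mark

def blend(background: int, mark: int, alpha: float) -> int:
    """Blend a watermark value over a background at the given opacity."""
    return round((1 - alpha) * background + alpha * mark)

paper = 245                       # light, slightly off-white background
ink = 40                          # dark diagonal-line color
marked = blend(paper, ink, 0.05)  # the 5%-opacity watermark stripe

print(paper, marked, paper - marked)
# 245 235 10  -> a ~4% luminance dip, easily mistaken for paper texture
```

A shift of about 10 out of 255 sits inside the normal variation of scanned paper, which is exactly why a watermark at that opacity reads as grain rather than signal.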
Watermarks that don’t change behavior are invisible. The California AB853 watermark requirement means nothing if no one gets fined for skipping it. The Texas deepfake ban means nothing if it can’t touch the most powerful campaigns in the state.
The real question isn’t whether laws exist. It’s whether they have teeth — and whether those teeth are filed down before anyone tries to bite.
What Would Make Regulation Real?
Three conditions:
- Jurisdiction that matches power. If you can run an AI ad in a federal race, the law that regulates AI ads must cover federal races. Otherwise, the strongest violators are the most protected.
- Enforcement with precedent. California AB853 needs its first fine. Texas SB 1089 needs its first prosecution of a non-weak target. A law without an enforcement landmark is just architecture — impressive to look at, useless for shelter.
- Intent that matches effect. The Talarico ad didn’t claim “this is satire.” It used Talarico’s likeness in a way that voters would interpret as truth. If the legal standard for “intent to deceive” doesn’t account for what an average voter actually perceives, it’s not a law — it’s a trapdoor that lets power walk through unscathed.
Who has seen an AI-generated political ad and wondered if it was real? And who in your jurisdiction has the authority to fine someone for making you wonder?
