TL;DR
X Wins Deepfake Law Challenge: A federal judge struck down California’s law restricting AI-generated election deepfakes, siding with Elon Musk’s platform X and citing Section 230 protections. The ruling highlights how sweeping AI restrictions can run afoul of free speech rights.
Kentucky Uses AI to Bridge Divides: Warren County, Kentucky, ran the “largest town hall in America” using AI to identify consensus across 8,000 residents. The experiment shows how AI can foster democratic dialogue and reduce polarization without chilling expression.
Meta Settles AI Defamation Suit: Meta resolved a lawsuit after its AI allegedly generated defamatory claims about political activist Robby Starbuck. The settlement, which includes collaboration on bias, reflects rising concerns over AI “hallucinations” and reputational harms.
Klobuchar Presses for Deepfake Law: Senator Amy Klobuchar renewed calls for her No Fakes Act after a viral AI deepfake falsely depicted her making vulgar remarks. Mandatory takedowns and labeling requirements, however, could chill lawful political expression.
AI Surveillance Targets Protestors and Migrants: Amnesty revealed U.S. authorities used Palantir and Babel Street tools to monitor migrants and pro-Palestinian student activists. The programs risk turning lawful speech and protest into grounds for deportation and detention.
YouTube Tests AI Age Checks: YouTube is piloting AI to estimate user ages based on viewing patterns, limiting access for those flagged as under 18. While intended to protect minors, the opaque system raises questions about algorithmic profiling and access to information.
OpenAI Transitions from Blunt Refusals to Nuanced Safe Completions: GPT-5 now advances free expression by offering helpful answers where possible—even on sensitive or dual-use topics—while still honoring safety constraints.
Major Stories
» X Wins Legal Challenge to California’s Election Deepfake Law
A federal judge has struck down a California law restricting AI-generated deepfakes during elections after Christopher Kohls, a content creator, and X challenged the rule.
The Law:
The law, the Defending Democracy from Deepfake Deception Act (AB 2655), would have blocked platforms from hosting “materially deceptive” AI-generated election content in the run-up to a vote.
U.S. District Judge Mendez also indicated he would strike down a related law, AB 2839, which requires labels on digitally altered political ads, on First Amendment grounds.
In Court:
Judge Mendez permanently enjoined the first law, holding that it conflicts with, and is therefore preempted by, Section 230 of the federal Communications Decency Act, which shields platforms from liability for third-party content.
He declined to rule on the plaintiffs’ First Amendment arguments, saying, “I’m simply not reaching that issue.”
Mendez said he would write an official opinion on the second law [AB 2839] in the coming weeks, adding it “fails miserably in accomplishing what it would like to do.”
Our Take:
In a column at MSNBC, Executive Director Jacob Mchangama argued that California’s law would “chill political speech, infringe on Californians’ ability to criticize politicians, undermine platforms’ rights to moderate content, and even prevent people from highlighting 'deceptive' content as fake.”
Jacob also cited research showing that the extent and impact of disinformation, including deepfakes, are typically far smaller than alarmist narratives assume.
» Kentucky Town Uses AI to Bridge Political Divides
A month-long “town hall” used AI to collect and synthesize feedback from nearly 8,000 residents in Warren County, Kentucky (which contains Bowling Green).
Details:
Using Pol.is, an open-source polling platform that has been deployed with notable success in Taiwan, the county surveyed participants on what they wanted to see in their community over the next 25 years.
Sensemaker, an AI tool that analyzes large sets of online conversations, identified 2,370 ideas that at least 80% of respondents could agree on and consolidated them into a policy report (a toy sketch of this consensus threshold follows this list).
The experiment was “the largest town hall in America,” said Warren County Judge-Executive Doug Gorman, who found broad agreement once ideas were presented anonymously and stripped of political identity.
The survey reached people who were unable to attend town halls and was accessible in multiple languages to immigrants who may have been excluded from such conversations.
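To make the consensus step concrete, here is a minimal Python sketch of the 80% agreement threshold described in the details above. It assumes a simple matrix of agree/disagree/pass votes; the actual Pol.is and Sensemaker pipeline (opinion clustering, AI synthesis of free-text ideas) is far more sophisticated, and the names and data here are purely illustrative.

```python
# A toy sketch of the consensus threshold described above; the real
# Pol.is/Sensemaker pipeline also clusters opinions and synthesizes
# free-text ideas. Names, data, and the vote encoding are illustrative.
from typing import Dict, List

# votes[statement_id] holds responses: +1 agree, -1 disagree, 0 pass
def consensus_statements(votes: Dict[str, List[int]],
                         threshold: float = 0.80) -> List[str]:
    """Return statements whose agreement rate among decided (non-pass)
    votes meets or exceeds the threshold."""
    winners = []
    for statement_id, responses in votes.items():
        decided = [v for v in responses if v != 0]  # drop "pass" votes
        if not decided:
            continue
        agree_rate = sum(1 for v in decided if v > 0) / len(decided)
        if agree_rate >= threshold:
            winners.append(statement_id)
    return winners

if __name__ == "__main__":
    sample = {
        "more-greenways": [1, 1, 1, 1, -1],   # 4/5 agree = 80%
        "new-toll-roads": [1, -1, -1, 1, 0],  # 2/4 agree = 50%
    }
    print(consensus_statements(sample))  # ['more-greenways']
```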
Our Take:
“There’s been a lot of talk about the dangers of AI, particularly when it comes to its effect on our political process. But this shows how we can harness this technology as an empowering tool for communication that bridges divides caused by hyperpartisanship and polarization without sacrificing free speech.” — Jacob Mchangama
» Meta Settles AI Defamation Lawsuit with Robby Starbuck
Meta has reached a settlement with political activist Robby Starbuck, who sued the company, alleging that its AI platform produced false and defamatory statements about him in response to user prompts.
Details:
Starbuck sued Meta, claiming its AI chatbot falsely accused him of taking part in the January 6, 2021, U.S. Capitol riot and being affiliated with the conspiratorial QAnon movement.
Starbuck filed the defamation suit in Delaware Superior Court, seeking over $5 million in damages.
Settlement:
As part of the settlement, Robby Starbuck will work with Meta to address “ideological and political bias” in its AI.
» Senator Amy Klobuchar Calls for Federal Deepfake Legislation
Democratic Senator Amy Klobuchar criticized the spread of a realistic AI-generated “deepfake” video that falsely depicted her making vulgar commentary, renewing her call for tougher federal anti-deepfake legislation.
Details:
The viral AI-generated video featured Klobuchar offering a vulgar critique of an American Eagle ad campaign starring Sydney Sweeney.
In a NYT op-ed, Klobuchar called on Congress to pass her No Fakes Act, which “would give people the right to demand that social media companies remove deepfakes of their voice and likeness while making exceptions for speech protected by the First Amendment.”
This includes “labeling requirements for content that is substantially generated by A.I.”
Her appeal to Congress followed her own experience trying to get her AI likeness removed.
She noted mixed responses from platforms: TikTok removed the video, Meta flagged it as AI-generated, but X (formerly Twitter) provided minimal support, merely suggesting users add context via its Community Notes feature.
Our Take:
While AI deepfakes are undoubtedly a challenge for democratic societies, FoFS’ Senior Legal Fellow Ashkhen Kazaryan and Communications Coordinator Ashley Haek have pointed out how the comparable TAKE IT DOWN Act’s expansive enforcement mechanisms, which require platform takedowns, and its vague provisions raise substantial free expression concerns.
Forced disclosure can infringe on speakers’ autonomy by compelling them to include disclaimers they may not agree with, altering their intended message. Such mandates may also chill protected expression, as speakers might avoid using AI-generated content altogether to sidestep legal risks or public skepticism.
» AI Surveillance Used in Crackdown on Migrants and Student Protestors
Amnesty International has accused U.S. authorities of using Palantir’s Immigration OS and Babel Street’s Babel X tools to surveil migrants, refugees, and pro-Palestinian student protestors.
Background:
Amnesty’s research, based on DHS records, procurement contracts, and FOIA disclosures, shows these AI systems underpin the government’s “Catch and Revoke” initiative, which tracks foreign nationals, monitors social media, and automates visa revocations.
The crackdown comes amid President Trump’s second-term focus on deportations and his administration’s labeling of campus protests over Gaza as “antisemitic.”
Details:
Babel X conducts automated sentiment analysis and persistent social media monitoring to assign intent scores that can flag lawful speech as “radicalization” (a toy illustration of this failure mode follows below). Palantir’s Immigration OS integrates federal databases to streamline deportation, monitor “self-deportations,” and prioritize cases for removal.
Amnesty found that the tools facilitate mass surveillance of students and migrants, and warns they could supercharge arbitrary visa revocations, detentions, and deportations while undermining the rights to free expression and protest.
Cases cited include international students detained after writing opinion pieces or attending campus protests, raising serious due process and discrimination concerns.
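To illustrate why civil liberties groups worry about this kind of scoring, here is a deliberately crude Python sketch. It reflects nothing about Babel Street’s actual system; the terms, weights, and threshold are invented to show how topic-keyed “intent” scores can sweep in protected speech.

```python
# A hypothetical toy "intent score," invented to illustrate the concern
# Amnesty raises: keyword-driven scoring has no notion of whether speech
# is lawful, so protected expression can cross a flagging threshold.
# These terms, weights, and the threshold are NOT Babel Street's.
FLAG_TERMS = {"protest": 0.4, "boycott": 0.3, "rally": 0.2}

def intent_score(post: str) -> float:
    """Sum crude keyword weights; real systems are more complex but can
    share the same failure mode of conflating topic with threat."""
    text = post.lower()
    return min(1.0, sum(w for term, w in FLAG_TERMS.items() if term in text))

def flagged_for_review(post: str, threshold: float = 0.5) -> bool:
    return intent_score(post) >= threshold

# A lawful call to "join the campus rally and boycott" scores 0.5 and is
# flagged: exactly the over-capture of protected speech critics warn about.
```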
Response:
Palantir responded that its products were not powering State Department “Catch and Revoke” operations but confirmed expanded ICE contracts tied to enforcement and deportation. Babel Street did not respond.
In response, Amnesty stated, “ICE plays a key role in enforcing the 'Catch and Revoke' policy by way of aggressive tracking and detention tactics, even if the policy as a whole is being implemented by the State Department. These responses do not fundamentally address the concerns raised by this research.”
Our Take:
As Jacob Mchangama and Hirad Mirami have argued, “To now deport people for unpopular opinions is not merely a constitutional gray zone. It is a betrayal of the very idea that truth and progress emerge from argument, not conformity.”
» YouTube Testing AI Age Verification System in the U.S.
YouTube is testing a new artificial intelligence system in the U.S. that estimates users’ ages based on the types of videos they have been watching; a toy sketch of this kind of inference follows the details below.
Details:
Users must be logged into their accounts for the system to work, and age assessments will ignore the birth date they enter.
If the system identifies a viewer as under 18, YouTube will impose the controls it already uses to prevent minors from watching videos deemed inappropriate.
These include reminders to take a break from the screen, privacy warnings, restrictions on video recommendations, and a ban on ads tailored to individual tastes.
While people can still watch YouTube without an account, certain content is automatically restricted without proof of age.
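YouTube has not disclosed how its age estimation works, so the following Python sketch is purely hypothetical: the profile fields, weights, and threshold are invented to illustrate the estimate-then-restrict flow described above, and why an opaque score that overrides a stated birth date raises profiling concerns.

```python
# A purely hypothetical sketch of age estimation from viewing signals.
# YouTube has not published its model; every feature, weight, and
# threshold below is invented to illustrate the flow, not to describe it.
from dataclasses import dataclass

@dataclass
class ViewingProfile:
    account_age_years: float
    share_teen_content: float   # fraction of watch time, 0..1
    share_news_content: float   # fraction of watch time, 0..1

def estimated_minor_score(p: ViewingProfile) -> float:
    """Combine signals into a score in [0, 1]; higher = more likely
    under 18. The weights are arbitrary illustration."""
    score = 0.5
    score += 0.3 * p.share_teen_content
    score -= 0.2 * p.share_news_content
    score -= 0.05 * min(p.account_age_years, 5.0)
    return max(0.0, min(1.0, score))

def restrictions_apply(p: ViewingProfile, threshold: float = 0.7) -> bool:
    """Above the threshold, the under-18 protections listed above
    (break reminders, ad limits, etc.) would kick in, regardless of
    the birth date on the account."""
    return estimated_minor_score(p) >= threshold

if __name__ == "__main__":
    p = ViewingProfile(account_age_years=0.5,
                       share_teen_content=0.9,
                       share_news_content=0.0)
    print(restrictions_apply(p))  # True: 0.5 + 0.27 - 0.025 = 0.745
```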
» OpenAI Pivots from Blunt Refusals to ‘Safe-Completions’
OpenAI unveiled a new safety method in GPT-5 that replaces refusal-based training with safe-completions. OpenAI says this approach both reduces harmful outputs and avoids unnecessary shutdowns of legitimate speech.
Details:
Instead of flatly refusing dual-use prompts, whose answers could be either harmless or harmful depending on user intent (e.g., asking how to ignite fireworks), GPT-5 now provides the safest possible helpful response, such as pointing users to standards or vendor datasheets, while withholding unsafe specifics.
The shift marks a more output-centric safety architecture, meaning GPT-5 evaluates potential risks in its answers—not just in the user's question.
Safe-completions work by:
Penalizing unsafe outputs (severity-scaled), regardless of user intent.
Rewarding safe helpfulness, either by directly advancing the user’s benign goal or by giving an informative, constructive refusal.
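Based on OpenAI’s published description, a minimal Python sketch of this severity-scaled reward might look like the following; the scale, weights, and scoring are illustrative assumptions, not OpenAI’s actual training code.

```python
# A toy sketch of the output-centric reward described above. The 0-4
# severity scale and the scoring are assumptions for illustration only.

def completion_reward(severity: int, helpfulness: float) -> float:
    """Score a candidate completion.

    severity:    harm severity of the *output* itself, 0 (harmless) to
                 4 (severe), judged regardless of user intent.
    helpfulness: 0..1, how far the response advances a benign reading of
                 the request; an informative, constructive refusal still
                 earns partial credit.
    """
    if severity > 0:
        return -float(severity)  # penalty scales with severity
    return helpfulness           # safe outputs rewarded for helpfulness

# Under this scheme, a flat refusal to a benign dual-use prompt scores
# near 0, a safe high-level answer (standards, vendor datasheets) scores
# near 1, and an unsafe answer is penalized no matter how "helpful."
```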
Compared to o3 (a refusal-trained model), GPT-5:
Produces safer outputs (lower severity when it errs).
Provides more helpful answers in benign and dual-use contexts.
Handles “grey area” prompts with nuance, offering high-level guidance, standards, or resources rather than hard refusals or unsafe details.
Our Take:
Our scholars Jacob Mchangama, Jordi Calvet-Bademunt, and Isabelle Anzabi updated our analysis of free-speech culture in major chatbots this year. They found notable progress among leading models, with refusal rates for controversial prompts falling from 50% in 2024 to 25% in 2025.
In this year’s exercise, OpenAI’s GPT-4o model performed worse than its GPT-3.5 model did last year. Nevertheless, the FoFS team is encouraged to see companies engaging seriously with issues of freedom of expression and access to information, and we look forward to further improvements in the next iteration of the study.
The Future of Free Speech in Action
Executive Director Jacob Mchangama published a public consultation response (in Danish) on Denmark’s draft bill on countering deepfakes.
Senior Research Fellow Jordi Calvet-Bademunt spoke on AI governance and freedom of expression at a virtual event hosted by the Global Network Initiative on August 19, 2025, with participation from academia, civil society, and industry.
The 2025 Global Free Speech Summit returns on October 3–4 in Nashville, TN. Hosted by The Future of Free Speech and Vanderbilt University, this invite-only event will bring together global leaders, thinkers, and change-makers to confront the most urgent threats to freedom of expression and explore bold, resilient solutions. Attendees will have the opportunity to attend networking receptions where they can exchange ideas with speakers and other guests. Seats are limited, but we encourage you to request an invitation here.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Hirad Mirami is a research assistant at The Future of Free Speech and a student at the University of Chicago studying economics and history.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.