.exe-pression: September 2025
A Newsletter on Freedom of Expression in the Age of AI
This edition of .exe-pression comes out a little later than usual, as our team was busy organizing the 2025 Global Free Speech Summit — a gathering of leading free speech advocates, activists, and experts. A huge thank-you to everyone who joined us in Nashville! If you couldn’t make it this year, be sure to subscribe to our email list to receive information about next year’s event.
TL;DR
California Deepfake Law Struck Down: A federal judge struck down a California law prohibiting political deepfakes before elections on First Amendment grounds, while enjoining a related piece of legislation forbidding digitally altered political ads.
Huawei’s Politically “Sanitized” Chatbot: The Chinese firm has developed what it calls a “safety-focused” AI model based on DeepSeek, claiming it can almost fully censor politically sensitive topics.
China Moves to Enforce AI Labeling Rules: Beijing’s cyberspace regulator announced it will penalize companies that fail to comply with new requirements for labeling AI-generated content, expanding measures aimed at shaping both domestic and international norms.
Italy Passes Comprehensive AI Law: Italy became the first EU member state to pass comprehensive AI regulations into law, aligning itself with the standards established by the EU’s AI Act.
OpenAI Announces Teen Safety Features for ChatGPT: As parents and online safety advocates testified in the Senate in support of more AI regulation aimed at protecting teens, OpenAI’s CEO published a blog post detailing the company’s new teen safety features.
Kansas Students Push Back Against AI Surveillance: A lawsuit in Kansas challenges the use of Gaggle, an AI surveillance tool that monitors student communications for “risky behavior,” arguing it chills free expression and raises privacy concerns.
Governor Newsom Signs AI Safety Bill: Governor Gavin Newsom signed SB 53, which requires major AI developers to disclose risk plans and safety incidents.
Major Stories
» Another California Deepfake Law Falls on First Amendment Grounds
A federal judge has struck down California’s Assembly Bill 2839, which restricted AI-generated deepfakes during elections, finding it unconstitutional under the First Amendment.
Background:
In our August edition, we covered U.S. District Judge John Mendez’s earlier ruling against another California deepfake law (AB 2655), where he invalidated the measure on Section 230 grounds without reaching the free speech claims.
Assembly Bill 2839 would have banned AI-generated “materially deceptive content” in political communications 120 days before an election and, in some cases, 60 days after an election.
The New Ruling:
Judge Mendez has now struck down Assembly Bill 2839 outright on First Amendment grounds, ruling that California cannot preemptively censor political content, even if it is AI-generated or misleading.
In his ruling, Judge Mendez wrote: “AB 2839 suffers from ‘a compendium of traditional First Amendment infirmities,’ stifling too much speech while at the same time compelling it on a selective basis … When it comes to political expression, the antidote is not prematurely stifling content creation and singling out specific speakers but encouraging counter speech, rigorous fact-checking, and the uninhibited flow of democratic discourse … Just as the government may not dictate the canon of comedy, California cannot pre-emptively sterilize political content.”
Free Speech Implications: In a column at MSNBC, Executive Director Jacob Mchangama argued that California’s law would “chill political speech, infringe on Californians’ ability to criticize politicians, undermine platforms’ rights to moderate content, and even prevent people from highlighting ‘deceptive’ content as fake.” Jacob also cited research showing how the extent and impact of disinformation, including deepfakes, are typically much smaller than alarmist narratives assume.
» Huawei Creates Politically ‘Sanitized’ Version of DeepSeek
Huawei, a major Chinese tech company, has developed an AI model based on the Chinese firm DeepSeek’s open-source R1 model, which Huawei claims is “nearly 100% successful” in blocking politically sensitive content.
Details:
Chinese law requires all AI models developed in the country to reflect “socialist values” before being released to the public. In practice, this entails censoring references to politically sensitive topics.
Huawei’s “safety-focused” version of DeepSeek, called DeepSeek-R1-Safe, defends against what the firm calls “common harmful issues… including toxic and harmful speech, politically sensitive content, and incitement to illegal activities.”
It is unclear what specifically constitutes “toxic and harmful speech” and “politically sensitive content” in the firm’s view, beyond the standard prohibitions on anti-China, anti-Xi Jinping, and anti-Communist Party content that all Chinese AI developers enforce.
Huawei’s AI “safety” announcement comes as many prominent Western AI firms, like OpenAI, seek to promote viewpoint diversity and expand users’ freedom of expression when engaging with chatbots.
» China Moves to Enforce AI Labeling Rules
China’s Administrative Measures for the Labeling of AI-Generated Content have come into force and require that AI-generated content be clearly labeled when distributed on Chinese platforms.
Background:
China’s new AI content labeling rules took effect in September, requiring both visible and embedded tags across content production, dissemination, and distribution.
China also issued the 2.0 edition of the Artificial Intelligence (AI) Safety Governance Framework, which “refines risk categories, explores risk-grading strategies and updates prevention measures dynamically.” The framework builds out a broader system for evaluating and managing “AI risks,” further integrating speech and information control into China’s AI governance architecture.
Details:
The Cyberspace Administration of China (CAC) issued the rules alongside mandatory national standards and practice guidelines to support implementation.
The deputy director general of the CAC’s Network Management Technology Bureau announced that companies failing to comply with the new labeling requirements will be penalized.
Major platforms — including ByteDance’s Douyin, WeChat, Weibo, RedNote, and AI startup DeepSeek — have already adopted compliance measures.
Free Speech Implications: Mandatory labeling requirements give the state broad power over digital expression. Enforcement raises risks of censorship and overreach, particularly as labeling frameworks become tied to political content controls.
» Italy First to Pass Comprehensive AI Law in EU
Italy’s new national AI law introduces criminal penalties for harmful AI use, limits youth access to AI tools, and establishes oversight.
Details:
Italy officially passed a national AI law, becoming the first EU member state to adopt a comprehensive AI regulatory framework.
The law is broadly aligned with the EU’s AI Act, aiming to ensure AI remains “human-centric, transparent and safe,” while encouraging innovation, privacy, and cybersecurity protections.
Key Provisions:
The law introduces prison sentences for spreading AI-generated or manipulated content (deepfakes) that causes harm.
The law requires children under the age of 14 to obtain parental consent to access AI tools.
Enforcement responsibility is assigned to the Agency for Digital Italy and the National Cybersecurity Agency.
Free Speech Implications: As an early adopter of this expansive national regulation, Italy may set a model for other EU states, pushing criminalization of AI harm into broader European norms — with implications for cross-border expressive content.
» OpenAI Creates Teen Safety Features for ChatGPT
In a September blog post, OpenAI’s CEO Sam Altman offered an overview of ChatGPT’s new teen safety features and how they might be reconciled with the company’s commitment to privacy and free expression.
Background:
In August, OpenAI was sued by a family alleging that ChatGPT aided their child in committing suicide.
OpenAI’s CEO announced the company’s new teen safety features on the same day that online safety advocates and parents testified at a Senate hearing to promote online safety regulation aimed specifically at AI developers.
This also comes as the FTC launched an inquiry into “AI chatbots acting as companions.” The FTC issued orders to seven AI providers seeking information on how these companies “measure, test, and monitor potentially negative impacts of this technology on children and teens.”
OpenAI also detailed new parental controls in a separate blog post. These controls allow parents to link accounts, create teen-specific model rules, disable features, receive alerts if the system indicates the teen is in distress, and set blackout hours during which a teen cannot use ChatGPT.
Details:
OpenAI’s new teen safety features aim to prevent the model from engaging in flirtatious content or encouraging self-harm, with stricter guardrails for users under 18.
Because the product is available to users aged 13 and over, the CEO explained the need for more extensive safety guardrails for those between the ages of 13 and 18.
For instance, flirtatious conversation, which adult users can engage in, is strictly prohibited for those under 18. Similarly, while adults can ask for fictional scenarios involving suicide, such queries are explicitly blocked for teen users.
Altman stated, “We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”
» AI Safety Tool Sparks Student Backlash Against Surveillance
An AI-powered safety tool, Gaggle, monitors student communications for “risky behavior” in schools. Students at Lawrence High School in Kansas sued the school district in August to stop its use, alleging that Gaggle’s surveillance is “unconstitutional and prone to misfires.”
Background:
In 2023, Lawrence High School installed Gaggle Safety Management, which scans student emails, papers, and uploads. The district signed a three-year contract, joining more than 1,500 districts across the country that use the system.
Officials defend the tool as enabling life-saving interventions where students are at risk of suicide or violence.
In some instances, however, LGBTQ students were potentially outed when Gaggle flagged their messages about sexual identity and mental health.
Students report overreach:
Between Nov. 2023 and Sept. 2024, the system flagged 1,200 items, of which 800 were deemed “nonissues.”
The Washington Post reports that the system deleted part of an art student’s portfolio after mistakenly flagging a photo of girls wearing tank tops as child pornography.
One student was questioned by the school administration after joking about dying during a fitness test. Another was “Gaggled” for writing “mental health” in a college essay.
Free Speech Implications: The system can have a chilling effect on expression, with students self-censoring essays, emails, and discussions of sensitive issues like mental health. AI surveillance can also violate student privacy and access to information online.
» Governor Newsom Signs California AI Safety Bill
Governor Gavin Newsom signed SB 53, also known as the Transparency in Frontier Artificial Intelligence Act, which requires large AI companies to disclose their safety plans and report any incidents.
Background:
The new law builds on recommendations from California’s first-in-the-nation report on “workable guardrails based on an empirical, science-based analysis of the capabilities and attendant risks of frontier models.”
Newsom vetoed a more expansive version of the legislation last year, noting concerns over its impact on AI innovation in the state.
The Law:
Large frontier developers – companies with over $500 million in annual revenue that develop “frontier” AI models – will need to develop and publish a frontier AI framework to manage “catastrophic risk.”
All frontier models (those trained using more than 10^26 floating-point operations) will need to publish transparency reports and report critical safety incidents, among other obligations.
Enforcement powers rest with the California Attorney General, with civil penalties of up to $1 million per violation.
Key Provisions:
The law includes whistleblower protections, a mandate to disclose redacted safety protocols, and requirements to report “critical safety incidents” (e.g., misuse, loss of control).
Free Speech Implications: Compelled disclosure of internal frameworks could pressure firms to reveal expressive design decisions. However, transparency empowers the public and watchdogs with more information, strengthening debate and oversight of powerful actors.
The Future of Free Speech in Action
The 2025 Global Free Speech Summit took place on October 3–4 in Nashville, TN. This year’s program spotlighted our large-scale AI project with a panel featuring Senior Research Fellow Jordi Calvet-Bademunt, who discussed AI legislation, the corporate practices of major AI providers, and their implications for freedom of expression. The Summit also featured a fireside chat on the future of AI between Executive Director Jacob Mchangama and Tyler Cowen (George Mason University), as well as a session by Jacob Shapiro (Princeton University) on free expression in generative AI tools, presenting additional findings from our AI project.
The Future of Free Speech and the R Street Institute co-hosted “Free Speech and Section 230 in the Age of AI: Should LLMs Be Liable for Misinformation?” on September 10, 2025. Senior Legal Fellow Ashkhen Kazaryan spoke during the fireside chat on algorithmic recommendations and participated in the panel on platform liability in the AI era, addressing the implications for free expression online.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Hirad Mirami is a research assistant at The Future of Free Speech and a student at the University of Chicago studying economics and history.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.