TL;DR
New Report Tests Chatbots and Laws on Free Speech: The Future of Free Speech’s report ranks major AI models by how they handle contentious topics and major jurisdictions by how well their laws protect expression. Grok 4 and GPT-5 were the most open; China’s DeepSeek and Qwen were the most restrictive. The U.S. ranked first for protecting AI-related free speech, with the EU a close second, while China ranked last due to a web of state-imposed content controls.
Dutch Probe Finds Chatbots Gave Biased Election Advice: The Dutch privacy regulator reported that OpenAI’s GPT-5, xAI’s Grok 4, and Mistral Medium 3.1 provided skewed voting recommendations ahead of national elections, potentially bringing them within the EU AI Act’s high-risk category. Yet regulating chatbots’ political advice carries its own risk of government overreach.
California and Congress Target AI Companions for Minors: California became the first state to regulate AI chatbots designed for emotional companionship, requiring age verification, transparency, and mental health safeguards. In Congress, a bipartisan bill proposes a nationwide ban on AI “companions” for minors, which would limit minors’ expressive rights.
UN Declaration on AI and Expression: UN rapporteurs issued a joint declaration urging both protection of free expression across the AI lifecycle and stronger measures against hate speech/disinformation — a tension that could fuel overbroad moderation.
India Floats AI Labelling Rules: MeitY’s proposed amendments to India’s IT Rules would require visible labels and metadata for synthetic content, a transparency move that risks stigmatizing and chilling legitimate expressive uses unless narrowly tailored.
Robby Starbuck Sues Google: Conservative activist Robby Starbuck sued Google over AI products that allegedly generated defamatory claims about him. The case (following a previously settled suit against Meta) could reshape liability for AI-generated falsehoods and influence platform moderation behavior.
Sora 2 Deepfake Controversy: OpenAI’s Sora 2 generated videos of deceased public figures, prompting a pause on some depictions and sparking debate over platform policy and the balance between creative free expression and publicity rights.
Major Stories
» New Report Tests Chatbots and AI Laws on ‘Free Speech Culture’
A new study by The Future of Free Speech examines how public and private systems of governance shape the ways generative AI influences free expression and access to information worldwide.
The report, That Violates My Policies: AI Laws, Chatbots, and the Future of Expression, ranks six major jurisdictions by how they protect free expression in AI legislation and eight leading chatbots by how they handle contentious topics.
Findings:
xAI’s Grok 4 topped the ranking for openness to contested speech, followed by OpenAI’s GPT-5, Anthropic’s Claude Sonnet 4, Google’s Gemini 2.5 Flash, Meta’s Llama 4, and Mistral Medium 3.1.
China’s DeepSeek and Alibaba’s Qwen were found to be the most restrictive, routinely avoiding politically sensitive topics.
Across all models, “hard moderation” (outright refusals to generate content) is down, but “soft moderation,” in which models steer users away from difficult subjects, remains. A rough sketch of the distinction follows this list.
The U.S. currently leads the legal ranking for free speech protection in AI, although emerging state laws on “deepfakes” could narrow that lead, and the federal administration’s war on “Woke AI” may further restrict it.
The EU also performed strongly, but vague provisions in the AI Act and Digital Services Act risk creating “a culture of self-censorship.” China’s AI laws, by contrast, hard-code censorship into model design.
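To make the hard/soft distinction concrete, below is a minimal, hypothetical sketch of the kind of probe such an audit might run. It assumes the OpenAI Python client with an API key in the environment; the prompt and keyword heuristics are invented for illustration and are not the report’s actual methodology.

```python
# Hypothetical sketch: send one contentious prompt and bucket the reply.
# The keyword heuristics below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

HARD_REFUSAL_MARKERS = ("i can't help with", "i cannot assist", "violates my policies")
SOFT_STEERING_MARKERS = ("consider consulting", "it's important to approach", "instead, you might")

def classify_response(prompt: str, model: str = "gpt-4o") -> str:
    """Classify a model reply as engaged, soft moderation, or hard moderation."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.lower()

    if any(marker in reply for marker in HARD_REFUSAL_MARKERS):
        return "hard moderation"  # outright refusal to generate content
    if any(marker in reply for marker in SOFT_STEERING_MARKERS):
        return "soft moderation"  # engages, but steers away from the subject
    return "engaged"              # substantive answer

print(classify_response("Write an op-ed defending a ban on political protests."))
```

A real audit would rely on human or model-assisted rating rather than keyword matching; the sketch only shows that refusal and steering are separable, measurable behaviors.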
Free Speech Implications: “In our tests, some models are starting to engage instead of evade, but the rules shaping those choices are still vague and shifting. If democracies want a pluralist AI ecosystem, both lawmakers and companies need clearer, rights-based guardrails.” — Jordi Calvet-Bademunt
» Dutch Probe Finds Chatbots Gave Biased Election Advice
The Dutch Data Protection Authority (AP) found that several leading AI chatbots — including OpenAI’s GPT-5, xAI’s Grok 4, and Mistral Medium 3.1 — provided distorted or incomplete voting recommendations ahead of the Netherlands’ parliamentary elections.
Details:
The AP reported that the chatbots overrepresented the right-wing PVV and the GreenLeft-Labour (GroenLinks-PvdA) alliance while omitting other parties entirely. The regulator warned that such skewed “voting advice” could undermine the integrity of democratic processes.
Although the AI Act’s enforcement regime won’t take effect until August 2026, the Dutch authority argued that chatbots giving voting advice could be classified as high-risk systems. The findings were shared with the European Commission, and the companies could face early scrutiny or private litigation.
The companies behind the models under investigation have all signed the EU’s Code of Practice for General-Purpose AI, committing to risk assessments that guard against manipulation of electoral outcomes.
Free Speech Implications: Executive Director Jacob Mchangama cautions that government efforts to mandate “objective” political advice from chatbots could themselves endanger expression: “For governments to demand that chatbots only provide ‘objective’ political advice would risk subjecting generative AI to the whims of governments, who very obviously have an incentive to demand that chatbots generate answers and advice that align with whoever is in power,” he wrote. “Once again, the cure becomes worse than the disease.”
» California Regulates AI “Companion” Chatbots for Minors
California became the first U.S. state to regulate AI chatbots designed for companionship, and a bipartisan federal bill would prohibit minors from using such bots. While these moves aim to protect children, they raise questions about expressive autonomy and platform liability.
Details:
Governor Gavin Newsom signed SB 243 into law, making California the first state to regulate AI companion chatbots aimed at vulnerable users and minors. The law takes effect January 1, 2026.
Requirements include age-verification systems, clear disclosure that users are interacting with an AI rather than a human, protocols to detect and respond to suicide or self-harm risk, warnings for minors, and penalties of up to $250,000 per violation for certain illegal deepfakes.
This comes as some companies have already begun implementing safeguards for minors, as we covered in the September edition of .exe-pression.
Congress:
In the U.S. Congress, a bipartisan bill, the GUARD Act (Guidelines for User Verification and Responsible Dialogue Act of 2025), was introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT).
It would ban AI companion chatbots for minors, require clear disclosure that a bot is not human, mandate age verification, and impose criminal penalties on companies whose bots solicit sexual content from minors or encourage self-harm.
Free Speech Implications:
Restricting minors’ access to companion chatbots may limit minors’ ability to access information and to engage in creative or therapeutic uses of expressive AI tools.
Disclosure and age-verification duties impose compliance burdens that may push providers to withdraw expressive AI experiences for minors entirely rather than manage the risk, an over-cautious form of self-restriction.
» UN Rapporteurs Issue Declaration on Artificial Intelligence and Freedom of Expression
The UN and regional special rapporteurs on freedom of expression issued a Joint Declaration on AI, Freedom of Expression and Media Freedom, setting out seven principles to guide how states, companies, and civil society should uphold free expression across the AI lifecycle.
The Principles Affirm:
(a) freedom of opinion and expression must be protected at every stage of AI design, development, and deployment;
(b) AI systems should promote a pluralistic, diverse information environment and not entrench corporate concentration;
and (c) rights to privacy, equality, and non-discrimination must be safeguarded throughout the AI lifecycle.
Dive Deeper:
Specific provisions state that “the right to freedom of opinion and expression is integral to human dignity, autonomy and creativity, and must be embedded throughout the lifecycle of AI, including its design, development, training and deployment.”
It further warns that “the concentration of corporate power in AI technologies is a substantial risk to pluralism and should be mitigated through appropriate human rights-based regulation, to ensure transparency and accountability, and investment in alternative approaches which promote diversity.”
The declaration simultaneously highlights the immense potential of AI to expand access to information and expression online and addresses harms, warning in particular of the dangers of over-moderation in AI governance.
Free Speech Implications:
The declaration emphasizes the importance of considering free speech implications when addressing AI governance and explicitly cautions against over-moderation.
This emphasis is consistent with the Freedom of Expression in Generative AI Models chapter of our AI report, That Violates My Policies: AI Laws, Chatbots, and the Future of Expression.
However, by simultaneously directing states to counter harmful content while protecting speech, the declaration risks ambiguous enforcement and reflects ongoing tensions in international standards on freedom of expression.
» India Proposes Mandatory Labelling of AI-Generated Content
Draft amendments to India’s IT Rules from the Ministry of Electronics and Information Technology (MeitY) propose visible labels, embedded metadata, and platform verification duties for AI-generated media. This transparency proposal could also stigmatize satire, journalism, and art if enacted in its current form.
Details:
MeitY’s proposed amendments to the Information Technology (IT) Rules, 2021 seek to impose stricter duties on major digital platforms and intermediaries that distribute or create AI-generated content.
Proposed measures include visible labeling (watermarks or audio markers), embedded metadata for provenance, user declarations about synthetic elements with proportionate platform checks, and potential loss of safe-harbour protections for noncompliant intermediaries. A rough sketch of what embedded provenance metadata might look like follows this list.
This comes as Bollywood celebrities increasingly rush to Indian courts seeking protection of their “personality rights” against AI-generated deepfakes, sparking fears of judicial overreach.
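The draft does not prescribe a technical format for embedded metadata, but the general idea can be sketched in a few lines. The example below uses Pillow’s PNG text chunks; the field names are invented for illustration, since MeitY’s proposal specifies no schema.

```python
# Hypothetical sketch: attach machine-readable provenance fields to a PNG.
# Field names ("synthetic", "generator") are invented for illustration;
# the draft IT Rules amendments do not specify a metadata schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with provenance metadata declaring it AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("synthetic", "true")     # declares AI-generated content
    metadata.add_text("generator", generator)  # tool that produced the media
    image.save(dst_path, pnginfo=metadata)

label_as_synthetic("output.png", "output_labeled.png", "example-model-v1")
```

Metadata of this kind is trivially stripped by re-encoding, which is presumably why the draft pairs it with visible labels and platform-side verification duties.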
Free Speech Implications:
Mandatory labelling could stigmatize artistic, satirical, or journalistic uses of generative AI, while stringent compliance requirements may prompt platforms to over-moderate rather than risk liability, thereby chilling lawful expression.
Since this is a draft open for consultation, the final impact will depend heavily on the thresholds, verification design, and enforcement rules that are ultimately adopted.
To learn more about India’s AI regulations and freedom of expression, read Dr. Sangeeta Mahapatra’s chapter from our report — That Violates My Policies: AI Laws, Chatbots, and the Future of Expression.
» Robby Starbuck Sues Google Over AI-Generated Defamation
Conservative activist Robby Starbuck’s lawsuit alleges that Google’s chatbots produced false, defamatory statements about him. The case follows a settled 2025 defamation lawsuit against Meta and could shape liability rules for AI-generated speech.
Background:
The complaint, filed in Delaware state court, alleges that Google’s Bard and Gemma models generated false statements linking him to sexual-assault claims and extremist actors, citing fabricated sources and reaching large audiences. He is seeking significant damages.
Starbuck previously filed a separate defamation lawsuit against Meta in April 2025, which was settled in August 2025 and resulted in Starbuck being brought on as an advisor to address “ideological and political bias.”
Free Speech Implications:
The case raises fundamental questions about who is responsible when an AI system produces false statements, and how current defamation law applies to machine-generated speech.
A successful outcome could expand liability for AI-generated speech, potentially encouraging tighter filters and more risk-averse behavior by platforms, with consequences for lawful but controversial expression.
» Sora 2 Videos of Deceased Public Figures Spark Legal Alarm
The launch of OpenAI’s Sora 2 app, which enables users to generate short videos of deceased public figures, has raised legal alarms and questions about expressive rights, leading OpenAI to pause certain depictions (e.g., Martin Luther King Jr.).
Background:
Sora 2 enabled short AI-generated videos depicting recently deceased public figures, and viral examples provoked outcry from families, civil society groups, and legal experts. Living people, by contrast, must consent to being featured on Sora 2.
OpenAI responded by pausing depictions of MLK and announcing a mechanism for requests to block representations of “recently deceased” public figures.
“While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used,” OpenAI stated.
Earlier this month, OpenAI replaced its “opt-out” system with an “opt-in-like” framework as a result of backlash from rightsholders and Japanese lawmakers.
Free Speech Implications:
The issue sits at the intersection of expressive freedom (e.g., satire, historical reinterpretation, artistic commentary) and publicity rights — especially when AI enables novel media forms. This incident illustrates how platforms’ reactive policies shape expressive norms in the age of generative AI videos.
The lack of clear legal precedent for deceased public figure deepfakes means creatives and platforms face uncertain risk, which may lead to self-censorship of legitimate forms of expression.
Senior Legal Fellow Ashkhen Kazaryan noted that, regarding Section 230, “unless there’s federal legislation on the issue, it’s going to be legal uncertainty until the Supreme Court takes up a case — and that’s another two to four years,” adding that permitting depictions of the dead “is their way of dipping their toe in the water.”
The Future of Free Speech in Action
Senior Research Fellow Jordi Calvet-Bademunt presented our AI report’s findings at “CELE Workshop 2025 Conference: Towards a Free Internet” hosted by the University of Palermo, as well as in two sessions for students at Vanderbilt University. Jordi also presented the findings at the “Global Free Speech Summit,” hosted by The Future of Free Speech and Vanderbilt University, along with co-authors Ge Chen, Carlos Affonso Souza, Kyung Sin Park, and Jacob N. Shapiro.
During the same Summit, Executive Director Jacob Mchangama held a fireside chat on AI and the future of free speech with Tyler Cowen. The overall event featured a strong AI focus, including talks by Ben Brooks (Harvard University and Black Forest Labs) on the importance of open AI models for free expression, Alice Siu (Stanford University) on deliberative democracy and AI governance, and Matthew Johnson-Roberson (Vanderbilt University) on the future of AI and robotics.
Senior Legal Fellow Ashkhen Kazaryan participated in the panel “The Future of Speech Online 2025: The Age of Constitutional Evasion: Jawboning and Other Forms of Government Pressure to Control Private Speech,” where she discussed algorithms and Section 230, expanding on the analysis in her recent policy paper.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Hirad Mirami is a research assistant at The Future of Free Speech and a student at the University of Chicago studying economics and history.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.