.exe-pression: July 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
TL;DR
Trump’s AI Action Plan and Executive Order on procurement aim to prevent “woke” AI in the federal government. This could have unintended consequences for free speech.
Missouri’s AG Targets AI Bias by demanding records from top AI companies over alleged anti-Trump output, accusing them of political censorship. The probe could pressure platforms to tilt AI outputs to favor conservative narratives.
The EU Unveils Its First AI Code of Practice, outlining transparency, risk, and safety duties for general-purpose AI models.
Turkey Bans Grok for Insulting Leaders, citing 50 posts that allegedly threatened public order and defamed political and religious figures. The swift block highlights growing international efforts to restrict AI outputs deemed disrespectful.
Poland to Report AI Chatbot Grok to the European Commission after Grok generated antisemitic content and offensive remarks about Polish politicians.
X to Let AI Bots Fact-Check Posts through its Community Notes system, while keeping final approval in the hands of human contributors.
Russia and Belarus Launch “Patriotic AI” to counter Western influence and promote traditional values. Officials say it’s a necessary response to American platforms spreading extremist views.
China Launches Two‑Month Sweep to Combat “Harmful” Content, focusing on AI-driven addiction, emotional manipulation, and ideologically sensitive material.
Thailand Debuts AI to Shut Down “Illegal” Sites, a category that includes political dissent and criticism of the monarchy; officials predict a sharp rise in takedowns.
Delhi Police Relied on Facial Recognition Technology for Riot Arrests, leading to the wrongful detention of Muslim men on the basis of FRT matches alone.
Major Stories
» Trump Administration Issues Sweeping AI Action Plan Focused on Deregulation and ‘Ideological Neutrality’
President Trump released “America’s AI Action Plan” and signed three Executive Orders aimed at accelerating U.S. AI dominance while curbing perceived ideological bias in government-purchased AI systems.
Background:
The Plan aims to remove references to DEI, climate change, and misinformation in federal AI standards, directing NIST to revise its AI Risk Management Framework.
The Plan also recommends that NIST evaluate frontier models from China for alignment with Chinese Communist Party talking points and censorship.
A complementary executive order requires federally procured LLMs to be “truth-seeking” and “ideologically neutral.”
Response:
The Plan received praise from several conservative tech groups, industry leaders, and policy advocates as a bold step toward deregulation and innovation.
The Center for Democracy and Technology called the Plan “highly unbalanced, focusing too much on promoting the technology while largely failing to address the ways in which it could potentially harm people,” while offering cautious support for the promotion of open-source and open-weight systems and increased focus on security.
Our Analysis:
FoFS argues that the AI Action Plan’s support for open-source and open-weight models is a positive step, recognizing their value for innovation, security, and research. However, this support must uphold true pluralism by ensuring government-backed models reflect diverse viewpoints, not ideological conformity.
The Plan raises free speech concerns by directing NIST to revise the AI Risk Management Framework to remove references to misinformation, DEI, and climate change. While some of these initiatives have been controversial, the Administration should not replace one agenda with another or use AI policy to enforce ideological conformity.
We state that “[w]hile framed as a defense of free speech and American values, advocates should be vigilant to avoid a scenario where the Plan reshapes the AI landscape along narrowly defined ideological lines.”
Read our full response from Isabelle Anzabi and Jordi Calvet-Bademunt at the Bedrock Principle.
» Missouri AG Targets AI Chatbots over Alleged Political Bias
Missouri Attorney General Andrew Bailey sent a formal demand letter to Google, Microsoft, OpenAI, and Meta regarding “biased and inaccurate responses” produced by their AI chatbots. The move has raised concerns about “jawboning” and government overreach into protected speech.
Details:
AG Bailey alleges the platforms’ chatbots censor content favorable to President Trump while promoting criticism of his administration.
He pointed to chatbot responses that ranked Trump last among the five most recent presidents in handling antisemitism, calling it “AI-generated propaganda masquerading as fact.”
The AG is demanding internal documentation on how political content is handled, including how datasets are selected and curated, whether chatbots are trained to treat political viewpoints differently, and why specific responses are generated.
He invoked Missouri consumer protection laws to justify these demands, arguing they are necessary to shield users from “manipulation.”
Context:
AG Bailey previously led a high-profile lawsuit against the Biden administration for allegedly pressuring platforms to suppress disfavored views, a tactic legal experts call “jawboning.” His current demands reflect a similar strategy.
Bailey’s demands could compel platforms to alter or restrict chatbot responses based on political content, potentially chilling access to lawful information and setting a precedent for ideologically motivated state censorship.
Writing in the St. Louis Post-Dispatch, Ashkhen Kazaryan and Ashley Haek warn that this move raises serious free speech concerns.
» European Commission Releases First Compliance Guide for General-Purpose AI Models
The EU published its long-awaited General-Purpose AI Code of Practice, with relevant implications for free speech.
Details:
The Code of Practice is a voluntary tool that serves as a guide for industry on how to implement the regulations of the sweeping AI Act.
The Code was developed by independent experts with input from over 1,000 stakeholders.
It was endorsed by the Commission and the AI Office, and companies can rely on it to demonstrate compliance.
The Code has three chapters on transparency, copyright, and safety. The safety chapter outlines practices for managing systemic risk, a notion that has raised concerns for free expression, given its vagueness and breadth.
Related:
The European Commission also released a mandatory template for GPAI developers to summarize the content used to train their models.
Firms must disclose the top 10% of scraped internet domains by size, provide details on crawler behavior, and explain how they remove illegal content and honor opt-outs under the Copyright Directive.
The requirement applies across the model lifecycle, including post-market modifications and fine-tuning.
» Turkey Bans Elon Musk’s AI Chatbot Grok for ‘Insulting’ Content
On July 9, a Turkish court imposed a nationwide ban on Elon Musk’s chatbot Grok after it made “offensive remarks” about President Erdoğan, Mustafa Kemal Atatürk, the founder of the Turkish Republic, and religious figures.
Details:
The court found that the chatbot’s responses “threatened public order,” citing 50 AI-generated posts that it deemed “derogatory” or “vulgar.”
Under Turkish internet laws, content deemed “insulting to state figures” can be blocked without lengthy judicial review.
These restrictions were enforced by the Turkish Telecommunications Authority immediately following the court order.
» Poland to Report AI Chatbot Grok to the European Commission
Poland’s digitisation minister, Krzysztof Gawkowski, announced the government’s plan to ask Brussels to investigate Grok over offensive comments about Polish politicians.
Details:
Just a day before Poland’s July 9 announcement, Grok removed what it called “inappropriate” social media posts following complaints that it produced content with antisemitic tropes and praise for Adolf Hitler.
Gawkowski said the ministry will comply with current regulations and report the violation to the EU Commission to “investigate and possibly impose a fine on X.”
» X to Let AI Bots Fact-Check Posts with Human Oversight
Elon Musk’s X is expanding its Community Notes program by allowing AI bots to propose fact-checks on posts, although human users will still be responsible for approving them.
Details:
The program, launched initially as Birdwatch, crowdsources contextual notes on misleading or false posts and has become a model for user-driven fact-checking.
The new feature will let developers build AI bots that propose draft notes, which will undergo the standard approval process and be clearly labeled as AI-generated.
» Russia and Belarus Unveil Censored ‘Patriotic AI’ To Rival the West
Moscow and Minsk have launched a joint artificial intelligence project called “the patriotic chatbot” to address Western “manipulation” and uphold “traditional values.”
Sergey Glazyev, secretary of the supranational Union State of Russia and Belarus, accused U.S.-based AI platforms of promoting “racist and extremist” views, from which he said the project will shield younger generations.
» Thailand Launches AI Platform to Take Down Illegal Websites
On July 5, Thailand’s Ministry of Digital Economy and Society officially launched WebD, an AI-powered platform designed to detect and shut down illegal websites with unprecedented speed.
Details:
WebD uses AI to identify URLs, gather evidence, and submit takedown requests to the court without paperwork.
Officials predict a 70.7% increase in blocked URLs by the end of 2025.
Thailand’s “illegal websites” category covers cybercrime but also extends to criticism of the monarchy and other political dissent.
» China Launches Two‑Month Sweep to Combat ‘Harmful’ Content
The Cyberspace Administration of China (CAC) has launched a broad enforcement campaign against “harmful” digital content directed at children, including AI-driven addiction, pornography, emotional extremism, and misuse of smart devices.
Background:
This campaign aims to further implement the Regulations on the Protection of Minors Online, which were issued last year.
Regulators will intensify scrutiny of “minor mode” settings, smart device safety, and AI content moderation. The campaign aims to suppress material deemed “vulgar” or ideologically problematic, such as superstition or materialism.
Enforcement may restrict platforms’ use of generative AI tools that produce emotionally persuasive or habit-forming content.
» Facial Recognition Technology Misused in Delhi
A collaborative investigation by The Wire and the Pulitzer Center details arrests during the 2020 Delhi riots based almost entirely on facial recognition technology (FRT), without credible corroborating evidence or eyewitness testimony.
Background:
Two Muslim men, Ali and Mohammed, spent years in pre-trial detention, alleged custodial torture, and endured severe financial, emotional, and health impacts.
They were arrested solely on the basis of FRT identifications that defense lawyers contest, citing mismatched clothing, inconsistent profiles, and the absence of Test Identification Parades (TIPs).
Details:
Delhi Police reported using facial recognition in over 750 riot-related cases and presented the results as evidence against those arrested.
With 758 riot-related cases in total, that means at least 98.9% were “solved” with the help of facial recognition technology.
However, more than 80% of cases resulted in acquittals or discharges, “raising serious questions about the reliability of a technology the police appear to have relied on so heavily.”
Links to Additional News
Government:
Citing Risk to Kids, California Bill Targets Controversial AI Chatbots (STATESCOOP).
Denmark introduces legislation to protect its citizens from AI deepfakes (NPR).
South Korea says new AI model successful in authenticating deepfakes (MLex).
China releases AI action plan days after the U.S., as global tech race heats up (CNBC).
China is Quickly Eroding America’s Lead in the Global AI Race (The Wall Street Journal).
Notice of the Xiamen City Data Administration on the Announcement of the List of Artificial Intelligence Application Scenario Opportunities (CSET).
Hong Kong Opens Criminal Probe into AI-Generated Porn Scandal at City’s Oldest University (NBC News).
South Korea AI law under pressure for revision amid growing concerns (MLex).
Inside India’s Scramble for AI Independence (MIT Technology Review).
British 999 call handler's voice cloned by Russian network using AI (BBC).
A Marco Rubio Imposter is Using AI to Call High-Level Officials (The Washington Post).
New York Court Tackles the Legality of AI Voice Cloning (Skadden).
This congressman wants to ban companies from using your search history to set personalized prices (NBC News).
New York moves to dismiss retail federation's US algorithmic pricing complaint (MLex).
Federal court rules Kansas legislators tried to suppress speech with 2021 voting law (The Wichita Eagle).
Ohio Schools Must Set AI Policies by Mid-2026 (Axios).
Strict liability for AI damages called for by EU Parliament study (MLex).
UN adds to global AI governance push with centralized standards database (MLex).
Creative and AI sectors meet as the UK Government launches AI and copyright working groups (JD Supra).
European Artists Unite in A Powerful Campaign Urging Policymakers to “Stay True To the [AI] Act” (IFPI).
AI Must Have Ethical Management, Regulation Protecting Human Person, Pope Says (National Catholic Reporter).
Industry:
Web Giant Cloudflare to Block AI Bots from Scraping Content by Default (CNBC).
Airbus, ASML, Mistral Bosses to Ask EU to Pause AI Rules (The Wall Street Journal).
AI Summaries Cause "Devastating Drop” in Audiences, Online News Media Told (The Guardian).
Generative AI, Not Ad Tech, is the New Antitrust Battleground for Google (DIGIDAY).
Nvidia Backed Perplexity Launches AI Browser to Take On Google Chrome (Reuters).
YouTube ‘clarifies’ its plan to demonetize spammy AI slop (The Verge).
YouTube to roll out new AI-powered technology aimed at identifying teen users (CBS News).
Teachers Union Partners with Anthropic, Microsoft, and OpenAI to Launch AI Training Academy (CBS News).
Alibaba-backed Moonshot Releases New Kimi AI Model that Beats ChatGPT, Claude in Coding and Costs Less (CNBC).
Researchers from OpenAI, Anthropic, Meta, and Google Issue Joint AI Safety Warning — Here’s Why (ZDNET).
When AI Imagines a Tree: How Your Chatbot’s Worldview Shapes Your Thinking (HAI).
The Future of Free Speech in Action
Senior Research Fellow Jordi Calvet-Bademunt discussed generative AI’s impact on free speech and the liability for AI-generated content on July 18, 2025, as part of Columbia University’s seminar on freedom of expression in the digital realm.
We published commentary on the newly released America’s AI Action Plan, exploring “The Anti-’Woke’ AI Agenda: Free Speech or State Speech?”
The 2025 Global Free Speech Summit returns on October 3–4 in Nashville, TN. Hosted by The Future of Free Speech and Vanderbilt University, this invite-only event will bring together global leaders, thinkers, and change-makers to confront the most urgent threats to freedom of expression and explore bold, resilient solutions. Attendees will have the opportunity to attend networking receptions where they can exchange ideas with speakers and other guests.
Seats are limited, but we encourage you to request an invitation here.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Ava Sjursen is a communications intern at The Future of Free Speech and a student at Boston College studying communications and political science.
Hirad Mirami is a research assistant at The Future of Free Speech and a student at the University of Chicago studying economics and history.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.