.exe-pression: June 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
TL;DR
Texas passed the “Texas Responsible Artificial Intelligence Governance Act,” enacting comprehensive AI regulation that targets behavioral manipulation, constitutional violations, discrimination, and the creation of harmful content.
U.S. judges sided with Anthropic and Meta in key AI copyright cases, but both rulings included significant caveats that leave the door open for future challenges.
The U.S. Senate stripped from the “One Big Beautiful Bill Act” a controversial proposal that would have imposed a moratorium on state AI legislation.
Officials and stakeholders are debating whether to pause or revise the EU AI Act, creating a window of opportunity for advocates of freedom of expression.
A major leak reviewed by ABC News (Australia) reveals how China systematically censors references to the 1989 Tiananmen Square massacre using AI tools and human moderators, with platforms like Douyin instructed to remove content depicting state violence.
Major stories
Texas Governor Signs Responsible Artificial Intelligence Governance Act
Governor Greg Abbott signed House Bill 149, the “Texas Responsible Artificial Intelligence Governance Act,” also known as TRAIGA. The law prohibits the intentional development and deployment of AI systems for: behavioral manipulation (encouraging physical harm or criminal activity), constitutional infringement (restricting federal constitutional rights), unlawful discrimination (targeting protected classes), and harmful content creation (producing child sexual abuse material, unlawful deepfakes, or explicit content involving minors). Texas is the fourth state to pass comprehensive AI regulation, after California, Colorado, and Utah. TRAIGA will go into effect on January 1, 2026.
The law was originally conceived as a comprehensive risk-based framework modeled on the Colorado AI Act, but it underwent substantial modifications during the legislative process, as reported in March’s edition of .exe-pression.
To learn more about the legislation, you can read Jason M. Loring and Graham H. Ryan’s brief in The National Law Review.
Anthropic and Meta Win Rulings on AI Training — But with Caveats
A federal judge has ruled that Anthropic’s use of millions of copyrighted books without permission to train its chatbot, Claude, was legal under U.S. copyright law. The fair use doctrine allows the use of copyrighted works without permission in certain circumstances, and AI companies argue that their systems create transformative content that qualifies. Copyright owners counter that the use of their material threatens their livelihoods.
U.S. District Judge William Alsup stated that “[t]he purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative” and qualified as “fair use” under U.S. copyright law. However, the company remains on the hook for copying and storing more than 7 million pirated books in a central library. Judge Alsup ordered a trial to determine the statutory damages resulting from this copyright infringement.
A different federal judge in San Francisco sided with Meta in a similar lawsuit. Thirteen prominent authors had alleged that Meta infringed their copyrights by using pirated versions of their books to train its LLaMA model. U.S. District Judge Vince Chhabria ruled in Meta’s favor, finding that the plaintiffs had advanced the wrong legal arguments and failed to develop a sufficient factual record. Crucially, the judge clarified that his ruling did not determine whether Meta’s conduct was lawful, and he appeared to invite future lawsuits that are more carefully argued and better substantiated.
At stake in these cases is not just copyright law but the future of AI as a tool for expression. While one court recognized AI training as potentially transformative, the rulings reflect an ongoing and unsettled debate over whether and how fair use applies, especially given unresolved concerns about how training data is acquired and stored.
To learn more about Anthropic’s case, you can read Matt O’Brien’s brief in AP News or Blake Brittain’s piece in Reuters. You can read more about Meta’s case in Matt O’Brien’s and Barbara Ortutay’s article for AP.
U.S. Senate Rejects AI Regulation Moratorium
In a dramatic reversal, the U.S. Senate voted 99–1 in the early hours of July 1 to strike a proposed federal moratorium on state AI regulations from President Donald Trump’s “One Big Beautiful Bill Act,” a sprawling tax and immigration package. The provision, championed by Sen. Ted Cruz (R-Texas), would have blocked states from enforcing or adopting most AI-related laws for up to ten years as a condition of receiving federal infrastructure funding.
The moratorium’s defeat followed a last-minute withdrawal of support by Sen. Marsha Blackburn (R-Tennessee), who had previously negotiated a compromise version that would have shortened the freeze to five years and carved out child-safety and publicity-rights laws. On the evening of June 30, Blackburn announced she would instead introduce an amendment to remove the moratorium entirely, citing concerns that “the current language is not acceptable to those who need these protections the most.”
Her amendment passed with overwhelming bipartisan support. Only Sen. Thom Tillis (R-North Carolina) voted against it.
Opponents—including civil society groups, labor unions, 17 Republican governors, 40 state attorneys general, and over 260 state legislators—had warned that the moratorium would undermine state-level protections against AI-driven harms, such as algorithmic discrimination and biometric surveillance. Sen. Maria Cantwell (D-Washington) welcomed the outcome, saying, “We can’t just run over good state consumer protection laws.”
To learn more, see Will Oremus’ brief in The Washington Post.
EU’s AI Act Faces Challenges
Despite Europe’s early lead in AI regulation, the European Union’s AI rules are being revisited as industry, lawmakers, and safety campaigners launch renewed lobbying efforts. Henna Virkkunen, the EU’s tech chief, has raised the possibility of pausing the application of parts of the AI Act, and the European Commission had already opened the door to targeted changes to the law. The Commission is also considering proposing changes to EU copyright rules in response to AI’s growing impact.
The EU is set to reexamine all of its digital regulations in December. For the moment, the EU is pressing ahead with finalizing a code of practice for general-purpose AI models, which is expected to be released this summer and will provide guidance to AI providers on how to implement the AI Act.
As Senior Research Fellow Jordi Calvet-Bademunt explained, the AI Act raises concerns regarding freedom of expression, particularly in relation to its obligations on societal or “systemic” risks. These provisions are broad and vague, leaving room for potential misuse. It is therefore essential for freedom of expression advocates to push for clear safeguards that ensure AI models can reflect a diversity of viewpoints and that the AI Act upholds the fundamental right to freedom of expression.
To learn more about these developments, you can read Pieter Haeck’s piece in Politico.
China Uses AI to Erase Tiananmen Square Massacre Mentions
A major leak of internal censorship files reviewed by the Australian Broadcasting Corporation provides insight into how China’s government uses AI and human moderators to systematically erase any reference to the 1989 Tiananmen Square massacre across domestic platforms. More than 230 pages of documents instructed the removal of content that “depicts state violence” and included compilations of text, images, and video content for reference. They were meant to be distributed to multi-channel networks — firms that oversee content creators’ accounts across various social media and video platforms, including Douyin, China’s counterpart to TikTok.
A leaked 2022 Douyin training manual labels the “Tank Man” photo “subversive” and instructs censors to suppress even innocuous symbols like candles or flowers. Chinese platforms deploy computer vision, natural language processing, and keyword filtering to scrub images, text, and symbolic patterns that echo the “Tank Man” image, such as a lineup of one banana facing four apples. The same censorship framework is being applied to Chinese AI models, such as DeepSeek, leading them to refuse to answer questions about the Tiananmen massacre.
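To make the pattern-matching layer of such a pipeline concrete, here is a minimal, purely illustrative sketch in Python. The blocked terms are hypothetical stand-ins rather than entries from the leaked documents, and the real systems reportedly pair human moderators with computer-vision and NLP models, not simple string matching; only the banana-and-apples pattern comes from the reporting.

```python
import re

# Hypothetical blocklist, for illustration only; the leaked documents
# describe far more sophisticated machine-learning systems.
BANNED_TERMS = {"tank man", "june 4"}

# Symbolic stand-in reported in the leak: one banana followed by four
# apples, echoing the lone figure facing a column of tanks.
SYMBOLIC_PATTERNS = [re.compile(r"🍌\s*🍎\s*🍎\s*🍎\s*🍎")]

def flag_post(text: str) -> bool:
    """Return True if a post matches a banned term or symbolic pattern."""
    lowered = text.lower()
    if any(term in lowered for term in BANNED_TERMS):
        return True
    return any(pattern.search(text) for pattern in SYMBOLIC_PATTERNS)

print(flag_post("🍌 🍎 🍎 🍎 🍎"))        # True: symbolic pattern
print(flag_post("holiday fruit basket"))  # False
```

Even this toy version shows why such filtering chills expression: the rules target not just explicit references but ordinary objects and images that might carry symbolic meaning.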
To learn more, read this article by ABC News (Australia).
Links to Additional News:
Industry:
Disney, Universal sue AI company for copying Minions, Avengers and ‘Star Wars’ characters (Los Angeles Times).
Getty Argues Its Landmark UK Copyright Case Does Not Threaten AI (Reuters).
BBC Threatens Legal Action Against AI Startup Over Content Scraping (The Guardian).
Anthropic wins key US ruling on AI training in authors' copyright lawsuit (Reuters).
Reddit sues AI startup Anthropic for breach of contract, 'unfair competition' (CNBC).
Meta sues AI ‘nudify’ app Crush AI for advertising on its platforms (TechCrunch).
Judge denies creating “mass surveillance program” harming all ChatGPT users (Ars Technica).
OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected (Ars Technica).
For Survivors Using Chatbots, ‘Delete’ Doesn’t Always Mean Deleted (Tech Policy Press).
News Sites Are Getting Crushed by Google’s New AI Tools (The Wall Street Journal).
Generative AI used to copy and clone French news media in French-speaking Africa (Reporters without Borders).
130,000 film and TV scripts stolen by AI companies, BFI report suggests (Film Stories).
Elon Musk says xAI will retrain Grok: 'Far too much garbage' (Business Insider).
X changes its terms to bar training of AI models using its content (TechCrunch).
Meta Aims to Fully Automate Ad Creation Using AI (The Wall Street Journal).
You Can Now Edit Videos With Meta AI (Meta).
Facebook is starting to feed its AI with private, unpublished photos (The Verge).
Nvidia's pitch for sovereign AI resonates with EU leaders (Reuters).
Paris-based Arlequin AI secures €4.4 million to fight disinformation with sovereign, unsupervised AI (EU-Startups).
Nvidia, Perplexity to partner with EU and Middle East AI firms to build sovereign LLMs (Computerworld).
“Yuck”: Wikipedia pauses AI summaries after editor revolt (Ars Technica).
AI chatbots tell users what they want to hear, and that’s problematic (Ars Technica).
AI Models And Parents Don’t Understand ‘Let Him Cook’ (404 Media).
Meta’s Oversight Board Combats Misleading Deepfake Endorsements by Changing Enforcement Approach (Oversight Board).
Deepfake Scams Are Distorting Reality Itself (Wired).
Welcome to Campus. Here’s Your ChatGPT. (The New York Times).
Meta plans to replace humans with AI to assess privacy and societal risks (NPR).
Online brothels, sex robots, simulated rape: AI is ushering in a new age of violence against women (The Guardian).
AI pioneer announces non-profit to develop ‘honest’ artificial intelligence (The Guardian).
Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds (The Guardian).
A.I. Is Poised to Rewrite History. Literally. (The New York Times).
Meta AI users confide on sex, God and Trump. Some don’t know it’s public. (The Washington Post).
Anthropic C.E.O.: Don’t Let A.I. Companies off the Hook (The New York Times).
Man Killed by Police After Spiraling Into ChatGPT-Driven Psychosis (Futurism).
Alibaba, Tencent Freeze AI Tools During High-Stakes China Exam (Bloomberg).
How AI is Powering a Literacy Breakthrough in the Philippines (Microsoft).
Google offers AI training for media outlets (Ijnet).
Can AI speak the language Japan tried to kill? (BBC).
How Language Bias Persists in Scientific Publishing Despite AI Tools (HAI).
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study (TIME).
How we really judge AI: most people evaluate it based on its perceived capability and their need for personalization (MIT News).
A team of MIT researchers is quantifying AI model uncertainty and addressing knowledge gaps (MIT News).
Exploring the Dangers of AI in Mental Health Care (HAI).
3 Questions: How to help students recognize potential bias in their AI datasets (MIT News).
Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction (The Hacker News).
Medical AI firm says competitor hacked prompts to steal secrets (Bloomberg Law).
Government:
AI model providers signing EU code of practice to get grace period (MLex).
The NO FAKES Act Has Changed – and It’s So Much Worse (EFF).
Data bill opposed by Sir Elton John and Dua Lipa finally passes (BBC).
California AI Report Urges Greater Safeguards to Avoid “Irreversible Harm” (State of California).
Philippines Proposes Cybercrime Law Revisions, Including AI Regulatory and Liability Frameworks for Regulators (Manila Bulletin).
Australian State Announces $28 million AI Investment (News AU).
Global consensus grows on inclusive and cooperative AI governance at IGF 2025 (DigWatch).
European Commission launches public consultation on high-risk AI systems (European Commission).
European Commission seeks experts for AI Scientific Panel (European Commission).
Cologne Higher Regional Court rejected a preliminary injunction against Meta's use of personal data for AI training (Data Guidance).
Swedish PM calls for a pause of the EU’s AI rules (Politico).
Kazakhstan’s new AI law charts ambitious course inspired by EU (Euractiv).
Japan passes innovation-focused AI governance bill (IAPP).
UNDP launches Human Development Report 2025 in the Republic of Korea, emphasizing human choices in the age of AI (UNDP).
New Hampshire jury acquits consultant behind AI robocalls mimicking Biden on all charges (AP News).
Trump plans executive orders to power AI growth in race with China (Reuters).
US lawmakers introduce bill to bar Chinese AI in US government agencies (Reuters).
New GOP bill would protect AI companies from lawsuits if they offer transparency (NBC News).
New York State Updates WARN Notices to Identify Layoffs Tied to AI (Bloomberg).
Nevada updates illicit material laws to include AI-generated content (StateScoop).
Counseling by chatbot? New Jersey legislators act to ban AI in therapy (New Jersey Monitor).
NAACP calls on Memphis officials to halt operations at xAI’s ‘dirty data center’ (TechCrunch).
U.S. AI Safety Institute Transforms into the Pro-Innovation, Pro-Science U.S. Center for AI Standards and Innovation (Department of Commerce).
Wrap Up: Congress Must Ensure the Federal Government Has Tools to Deploy Artificial Intelligence Effectively and Efficiently (Committee on Oversight and Government Reform).
California Cops Investigate ‘Immigration Protest’ With AI-Camera System (404 Media).
Red Tape Isn’t the Only Reason America Can’t Build (The Atlantic).
Privacy Commissioner of Canada’s annual report underscores need to prioritize privacy in an increasingly data-driven world and AI (Office of the Privacy Commissioner of Canada).
OpenAI, DeepSeek among businesses to face privacy-policy review in South Korea (MLex).
AI textbooks targeted for phaseout under new government (The Korea Times).
OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, Iran, N. Korea (HACKREAD).
Development of AI systems: CNIL publishes its recommendations on legitimate interest (National Commission for Information Technology and Civil Liberties).
ISO and IEC publish new AI impact assessment standard (Freevacy).
AI Singapore and the United Nations Development Programme Collaborate to Close the AI Literacy Divide & Transform Communities in Developing Countries (UNDP).
A.I. Computing Power Is Splitting the World Into Haves and Have-Nots (The New York Times).
Latin American countries to launch own AI model in September (Reuters).
English-speaking countries more nervous about rise of AI, polls suggest (The Guardian).
Americans worry about AI in politics — but they’re more worried about government censorship (FIRE).
The Future of Free Speech in Action
Senior Research Fellow Jordi Calvet-Bademunt was a featured speaker at the Global Network Initiative & Digital Trust & Safety Partnership EU Rights & Risks Stakeholder Engagement Forum on June 3, 2025. Jordi discussed the notion and application of systemic risk obligations under the AI Act and the Digital Services Act.
Jordi will also speak on generative AI’s impact on free speech and liability for AI-generated content on July 18, 2025, as part of Columbia University’s seminar on freedom of expression in the digital realm.
Senior Legal Fellow Ashkhen Kazaryan discussed AI and the First Amendment on the Under the Desk News podcast on June 5, 2025.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Hirad Mirami is a research assistant at The Future of Free Speech and a student at the University of Chicago studying economics and history.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.