.exe-pression: May 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
TL;DR
President Donald Trump signed the “TAKE IT DOWN Act,” the first federal law targeting nonconsensual intimate images, including AI deepfakes. It mandates removal within 48 hours but raises free speech concerns over its broad enforcement powers.
The House passed the “One Big Beautiful Bill Act,” including a 10-year ban on state-level AI regulation, sparking backlash from state officials of both parties who warn it would undermine existing and future laws. Procedural challenges are expected in the Senate.
Grok inserted claims about “white genocide” in South Africa into responses to unrelated prompts, prompting backlash and concerns about editorial influence. xAI attributed the behavior to an unauthorized system modification and promised to publish Grok’s internal system prompts.
Russia's Operation Overload used AI-generated media to spread disinformation about Ukraine and NATO, but failed to gain traction or influence public discourse due to low engagement.
Leaked guidelines from Meta show how its AI is trained to avoid controversial or harmful speech. Prompts are categorized as follows: “Tier one” content, like hate speech, is blocked outright, while “tier two” topics, like misinformation, are reviewed cautiously.
A study by researchers from top institutions accuses Chatbot Arena, a major crowdsourced AI ranking platform, of favoring big tech firms like Meta and Google through opaque private testing policies. The site’s operators deny the allegations.
The third part of the U.S. Copyright Office’s AI report explores how the fair use doctrine applies to generative AI training, calling such training often transformative. It advises against government intervention, which the Office believes “would be premature at this time.”
Main Stories

President Trump signs deepfake bill amid free speech concerns
President Trump signed the “TAKE IT DOWN Act,” which prohibits the nonconsensual online publication of intimate visual depictions of individuals, including AI-created “deepfakes.” Many states already criminalize the publication of non-consensual intimate imagery (NCII); however, the TAKE IT DOWN Act marks the first federal legislation to regulate NCII. The law was signed following the House of Representatives’ bipartisan 409-2 vote and the Senate’s unanimous approval. The bill’s sponsors state that the law marks a “major victory for victims of online abuse.”
Importantly, this law requires covered platforms to remove such depictions within 48 hours of notification from a victim. Digital rights organizations have raised concerns about the free speech implications of the takedown provision. Experts at The Future of Free Speech have pointed out that the Act responds to real harms, but in the hands of a government increasingly willing to regulate speech, its broad provisions provide a powerful new tool for censoring lawful expression, monitoring private communications, and undermining due process.
To understand free speech advocates’ concerns about the takedown requirements that federal regulators will now enforce against tech companies, you can read the article by our Senior Legal Fellow Ashkhen Kazaryan and our Communications Coordinator Ashley Haek, The Road to Enforcement Chaos: The Hidden Dangers of the TAKE IT DOWN Act. To learn more about the law, you can read the White House’s official press release.
House of Representatives passes 10-year moratorium on enforcement of state AI laws
The U.S. House of Representatives passed the federal budget reconciliation bill, the One Big Beautiful Bill Act, that includes a provision prohibiting states from regulating AI and other automated systems for 10 years. The bill passed with a vote of 215 to 214 along partisan lines. Its exact language is that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decisions during the 10-year period” following its enactment.
However, 40 attorneys general signed a letter opposing the amendment, stating that “this bill will affect hundreds of existing and pending state laws passed and considered by both Republican and Democratic state legislatures.” As National Conference of State Legislatures Chief Executive Officer Tim Storey states, the bill is expected to face challenges in the Senate for violating the Byrd Rule, which prohibits the inclusion of “extraneous matter” in budget reconciliation.
To learn more about the bill, you can read Julia Edinger’s brief in GovTech and Justin Hendrix and Christiano Lima-Strong’s piece in Tech Policy Press.
Grok chatbot inserts South African racial politics into unrelated responses
In responses to unrelated prompts, Grok made claims about a “white genocide” in South Africa. These claims echo views shared by Elon Musk, whose AI company operates the chatbot, and statements by the Trump Administration related to its recent refugee admissions of white South Africans.
“Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” Paul Graham, a technologist and founder of Y Combinator, wrote on X.
In response to the incident, xAI stated that “an unauthorized modification was made . . . which directed Grok to provide a specific response on a political topic, violating xAI’s internal policies and core values.” Additionally, xAI promised to publish its internal system prompts for Grok, which outline how the chatbot should respond to users. “The public will be able to review them and give feedback to every prompt change that we make to Grok,” the company said. “We hope this can help strengthen your trust in Grok as a truth-seeking AI.”
To learn more, you can read Kate Conger’s piece in The New York Times.
Russian disinformation campaign leaves no substantial impact
A Russian disinformation campaign known as Operation Overload has used AI-manipulated voices, forged logos, and faked headlines to impersonate trusted sources, such as media outlets, universities, and law enforcement. The latest phase of the operation mimicked “more than 80 organisations in the first three months of 2025.” Its narratives focused on discrediting Ukraine and destabilizing European democracies by spreading election disinformation and eroding trust in NATO, media, and academia. However, the campaign had little substantial impact because most of its content received limited engagement. As the Institute for Strategic Dialogue reported, “All of the other reviewed posts amassed little attention from real users and relied heavily on a bot network to generate likes and shares.”
To learn more, you can read the Institute for Strategic Dialogue’s investigation.
Leaked docs reveal Meta’s AI is constrained on controversial speech
Leaked documents from Scale AI, a data-labeling contractor, reveal Meta’s guidelines for evaluating its AI’s responses. This human feedback is intended to reduce model bias. The documents emphasize the importance of making the AI engaging and safe by identifying and rejecting prompts that could lead to inappropriate behavior on sensitive topics. The guidelines categorize prompts as follows.
“Tier one” prompts are rejected outright for involving hate speech, child exploitation, or explicit content (e.g., roleplay based on Lolita).
“Tier two” content, such as anti-vaccine misinformation or gender identity discussion, is flagged for cautious review but not automatically blocked.
Romantic or “flirty” interactions are allowed if they remain non-explicit.
To learn more, you can read the story at Business Insider.
Chatbot Arena accused of distorting AI rankings in favor of tech giants
A new paper from researchers at Cohere, Princeton, Stanford, and MIT claims that Chatbot Arena’s unfair practices have led to a “distorted playing field” that benefits large companies, such as Google and OpenAI. Chatbot Arena is a popular crowdsourced AI benchmark created as a research project at the University of California, Berkeley. The study found that “undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired,” resulting in selective disclosure of performance results. However, the site’s operators contend that their private-testing policy is public and dispute what they call “factual errors” in the study and the conclusions drawn from them.
To learn more about the study, you can read Matthew Sparkes’ brief in New Scientist and Maxwell Zeff’s piece in TechCrunch.
Copyright Office releases third part of its Copyright and AI Report
The U.S. Copyright Office released the pre-publication version of Part III of its Copyright and Artificial Intelligence Report. This section focuses on “Generative AI Training” and draws particular attention to how the fair use doctrine should apply in this context, which is an area of ongoing litigation and inquiry. The report details several stages in the development and deployment of generative AI models where the use of copyrighted materials for training could implicate copyright protections.
The report states, “[i]n the Office’s view, training a generative AI foundation model on a large and diverse dataset will often be transformative,” while noting that this is not absolute. In particular, it points out that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.” At this point in time, the Copyright Office “recommends allowing the licensing market to continue to develop without government intervention.”
This pre-publication version was released “in response to congressional inquiries and expressions of interest from stakeholders.” A final version will be published later and is not expected to include substantive changes to the analysis or conclusions.
Within days of the release of the AI report, President Trump reportedly fired the head of the U.S. Copyright Office, Shira Perlmutter. This follows the firing of the Librarian of Congress, who oversees the Copyright Office. In late May, a judge denied Ms. Perlmutter’s request for reinstatement, while a lawsuit challenging her dismissal continues.
To learn more, you can read the latest part of the report, a detailed analysis by Sidley, or a short brief by Baker Botts.
Links to Additional News
Industry:
Apple Eyes Move to AI Search, Ending Era Defined by Google (Bloomberg).
AI execs used to beg for regulation. Not anymore. (The Washington Post).
How Google forced publishers to accept AI scraping as price of appearing in search (Press Gazette).
Opt out or get scraped: UK’s AI copyright shake-up has Elton John, Dua Lipa fighting back (CNBC).
UK’s AI Copyright: Nick Clegg says asking artists for use permission would ‘kill’ the AI Industry (The Verge).
Plastic, fantastic ... and potentially litigious: AI Barbie goes from dollhouse to courtroom (Reuters).
The Times and Amazon Announce an A.I. Licensing Deal (The New York Times).
Are Character AI’s chatbots protected speech? One court isn’t sure (The Verge).
The Stakes for OpenAI’s Plan B: A Public Benefit Corporation (The New York Times).
Elon Musk's Grok AI Will ‘Remove Her Clothes’ In Public, On X (404 Media).
OpenAI Is Wrapping Itself in the American Flag to Sell “Democratic AI” (Tech Policy Press).
Mistral claims its newest AI model delivers leading performance for the price (TechCrunch).
Meta to use EU user content for AI training as opt-out period ends (Yahoo).
Meta’s AI app is a nightmarish social feed (The Verge).
Google might replace the ‘I’m Feeling Lucky’ button with AI Mode (The Verge).
OpenAI’s flagship GPT-4.1 model is now in ChatGPT (The Verge).
Gemini 2.5: Our most intelligent models are getting even better (Google).
Anthropic debuts most powerful AI model yet that can work for 7 hours straight (CNBC).
China’s DeepSeek quietly releases upgraded R1 AI model, ramping up competition with OpenAI (CNBC).
AI chatbot “Debunk-Bot” challenges conspiracy theories in new study (CBS News).
It’s Still Ludicrously Easy to Jailbreak the Strongest AI Models, and the Companies Don’t Care (Futurism).
Government:
Joint statement on Artificial Intelligence and Freedom of Expression (UN, OSCE, OAS, ACHPR).
ChatGPT faces possible designation as a systemic platform under EU digital law (MLex).
EU Commission opens door for ‘targeted changes’ to AI Act (POLITICO).
The AI Office announced a tender with an estimated total value over €9M (European Commission).
The European Commission releases analysis of stakeholder feedback on AI definitions and prohibited practices public consultations (European Union).
“Towards A New Democratic Pact for Europe”: Secretary General 2025 Report (Council of Europe).
Commission seeks feedback on the guidelines on protection of minors online under the Digital Services Act (European Union).
Hungary is the first country to sign the UNECE WP.6 Declaration on the Regulation of Products with Embedded AI (UNECE).
Africa Declares AI a Strategic Priority as High-Level Dialogue Calls for Investment, Inclusion, and Innovation (African Union).
DRAFT Study on human and peoples’ rights and artificial intelligence, robotics, and other new and emerging technologies in Africa (African Commission on Human and Peoples’ Rights).
Former journalist Evan Solomon named first-ever federal AI minister of Canada (CTV News).
Saudi crown prince launches new company to develop AI technologies (Reuters).
Nvidia sending 18,000 of its top AI chips to Saudi Arabia (CNBC).
Is AI the future of America’s foreign policy? Some experts think so (NPR).
Trump to Rescind Chip Curbs After Debate Over AI Rules (Bloomberg).
Musk’s DOGE expanding his Grok AI in US government, raising conflict concerns (Reuters).
DOGE Put a College Student in Charge of Using AI to Rewrite Regulations (Wired).
US Border Agents Are Asking for Help Taking Photos of Everyone Entering the Country by Car (WIRED).
Dramatic rise in publicly downloadable deepfake image generators (Oxford Internet Institute).
Oregon Senate passes bill to ban sharing AI-generated ‘deepfake’ porn (KOIN Portland).
Attempt to tweak Colorado’s controversial, first-in-the-nation artificial intelligence law is killed (The Colorado Sun).
A call for Maryland deepfake laws to punish defamation, political deception (The Baltimore Sun).
Texas lawmakers push to regulate AI in government and the tech industry (The Texas Tribune).
State claims there’s zero high-risk AI in California government—despite ample evidence to the contrary (Cal Matters).
Concerns raised over AI trained on 57 million NHS medical records (New Scientist).
OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation (WIRED).
Leveraging the power of data for public and animal health (European Medicines Agency).
Portugal takes AI copyright battle to Brussels (Euro Weekly News).
Coalition of state attorneys general urge Meta to address AI sexual exploitation risks (South Carolina Attorney General).
Why Pope Leo chose his name: AI, workers’ rights, new Industrial Revolution (CNBC).
The Future of Free Speech in Action
At the Copenhagen Democracy Summit 2025, our Executive Director, Jacob Mchangama, discussed the impact of AI legislation in Europe and the United States on free speech, as well as the role of generative AI systems in today’s public sphere.
Senior Research Fellow Jordi Calvet-Bademunt was a featured speaker at AI For Prosperity, an event organized by the U.S. Department of State, DCN Global, and World Learning in Buenos Aires, Argentina. Jordi discussed the risks and opportunities that AI regulation presents for free expression and creativity in today's evolving landscape.
Senior Legal Fellow Ashkhen Kazaryan was a panelist at the seminar “Strategies for Bolstering the Public Sphere in the AI Era,” organized by the Information Technology and Innovation Foundation (ITIF) and the Mission Institute for Strategy. Ashkhen discussed the importance of algorithmic recommender systems, current policy proposals to regulate them, and the free speech protections that apply to U.S. private actors deploying such systems.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. Isabelle Anzabi is a Research Associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.