.exe-pression: March 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
TL;DR
Almost 9,000 stakeholders, including Big Tech companies, sought to shape the U.S. AI Action Plan with policy recommendations.
The House Judiciary Committee subpoenaed 16 major tech companies to investigate potential jawboning of AI companies by the Biden-Harris Administration.
U.S. State Department plans to employ AI to revoke student visas over alleged Hamas support.
Hungary seeks to ban LGBTQ+ pride events and to use facial recognition to identify participants.
China's new AI labeling measures require AI-generated content to include visible markers and embedded metadata.
The European Commission published the third draft of its General-Purpose AI Code of Practice.
X and the Grok chatbot spark political controversy in India amid censorship concerns.
Virginia's governor vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) targeting high-risk AI systems. Texas overhauls TRAIGA.
Main stories
Big Tech’s push to influence the U.S. AI Action Plan
The White House’s Request for Information (RFI) on implementing the AI Action Plan drew 8,755 comments from industry, civil society, and other stakeholders. Major AI companies used this opportunity to outline how their interests align with the government’s. OpenAI raised concerns about China's AI advancements, describing DeepSeek as "state-controlled," and recommended that the U.S. consider banning such models to mitigate privacy and security risks.
Both OpenAI and Google advocated for the ability to train their AI models on copyrighted material under the fair use doctrine. They argue that such allowances are essential for maintaining the U.S.' competitive edge in AI, especially given that Chinese developers have fewer restrictions on data usage. This stance is part of their broader effort to influence the AI Action Plan to support innovation while addressing national security concerns.
The Future of Free Speech submitted comments to the RFI. We highlighted the critical role of the First Amendment in AI governance. While AI presents new challenges, regulatory approaches must be grounded in free speech principles and avoid overreach that could stifle free expression. You can read about our full recommendations in the post that Isabelle Anzabi, our AI Policy Research Associate, published at The Bedrock Principle.
For more on the responses to the AI Action Plan, read Clara Apt and Brianna Rosen’s analysis in Just Security.
House Judiciary Committee subpoenas Big Tech over AI censorship
Representative Jim Jordan, Chairman of the House Judiciary Committee, issued subpoenas to 16 major tech companies, including Adobe, Alphabet, Amazon, Apple, Meta, Microsoft, and Nvidia. These subpoenas seek communications between these companies and the Biden-Harris Administration, aiming to determine “how and to what extent the executive branch coerced or colluded with artificial intelligence (AI) companies and other intermediaries to censor lawful speech.” This is the latest push by the Trump Administration and congressional Republicans to investigate whether the previous administration pressured or jawboned these companies into using AI to censor lawful speech, particularly conservative viewpoints.
To learn more about the subpoenas issued to Big Tech, read Tina Nguyen’s brief in The Verge.
U.S. State Department to employ AI to revoke student visas over alleged Hamas support
The AI-powered “Catch and Revoke” initiative implemented by the State Department will involve AI-assisted reviews of tens of thousands of student visa holders’ social media accounts. Social media reviews are particularly aimed at identifying signs of alleged support for terrorism expressed after Hamas’ October 7, 2023, attack on Israel. Officials will also review news reports on past demonstrations against Israel’s policies and lawsuits filed by Jewish students that allege foreign nationals engaged in antisemitism. Experts worry about the accuracy of AI tools to create deportation lists as well as the free speech implications of monitoring social media posts, which are likely to chill speech that would otherwise be protected by the First Amendment.
To learn more about this program, read Axios’ article with the scoop. For more on why the “Catch and Revoke” initiative threatens First Amendment rights, read Faiza Patel’s analysis in Just Security.
Hungary seeks to ban Pride and to use facial recognition
In Europe, free expression concerns have also emerged around government use of AI. Hungary’s parliament recently moved to ban an LGBTQ+ Pride event and to employ AI-powered facial recognition technology to identify participants defying the ban. The measure amends the country’s law on assembly to criminalize organizing or attending events that violate Hungary’s controversial “child protection” legislation, which prohibits any “depiction or promotion” of homosexuality to minors under 18. EU officials swiftly condemned the proposal.
The lead rapporteur for the EU AI Act warned that it would “clearly breach” the new AI law, which prohibits the biometric surveillance of peaceful protesters. A Commission spokesperson noted that the legality of Hungary’s use of facial recognition would depend on whether the police deploy the technology in real time or after the event.
To learn more about the new law, read Ashifa Kassam’s article in The Guardian. For more on the EU’s reaction, see this Euronews article.
China releases notice requiring labeling of AI-generated content
Chinese regulators published the Labeling Measures for Content Generated by Artificial Intelligence, which standardize the identification of AI-generated content in order “to curb misinformation and enhance public trust in digital content.” AI service providers must add explicit and implicit labels to synthetic content through visible markers and embedded metadata. Service providers will also be required to verify metadata and issue appropriate warnings accordingly. These new rules complement China’s broader AI regulatory framework and will take effect in September 2025.
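As a purely illustrative sketch of the dual-label idea the measures describe, the snippet below attaches both an explicit, user-visible marker and implicit, machine-readable metadata to a piece of generated text, and shows a downstream verification check. All names, field choices, and label wording here are hypothetical; the measures themselves defer to Chinese national standards for the actual formats.

```python
import hashlib
import json

# Hypothetical visible marker; the real required wording is set by regulation.
VISIBLE_TAG = "AI-generated"

def label_ai_content(text: str, provider: str, model: str) -> dict:
    """Attach an explicit visible label and implicit embedded metadata."""
    metadata = {
        "generator": provider,
        "model": model,
        # A content hash lets downstream services check the payload is unchanged.
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {
        "display_text": f"[{VISIBLE_TAG}] {text}",   # explicit label (visible marker)
        "embedded_metadata": json.dumps(metadata),   # implicit label (metadata)
    }

def verify_label(record: dict) -> bool:
    """A deployer-side check: does the metadata match the visible payload?"""
    meta = json.loads(record["embedded_metadata"])
    body = record["display_text"].removeprefix(f"[{VISIBLE_TAG}] ")
    return hashlib.sha256(body.encode("utf-8")).hexdigest() == meta["sha256"]
```

In practice, the implicit label would be embedded in the file format itself (for example, image or video metadata) rather than carried alongside the text, but the verify-then-warn flow the measures require for service providers follows the same pattern.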
To learn more about the notice and its implications, read Yan Luo and Xuezi Dan’s brief from Covington.
European Commission releases the third draft of the General-Purpose AI Code of Practice
The European Commission released the third draft of the General-Purpose AI Code of Practice. Once approved, the Code will guide general-purpose AI providers in implementing the AI Act’s provisions on systemic risk and other requirements until harmonized standards are adopted in a few years. The final draft of the Code is expected in May 2025 and will subsequently be considered for approval.
To understand why free-speech advocates should press for clear protections for freedom of expression as the AI Act is implemented, read Safeguarding Freedom of Expression in the AI Era, an article by our Senior Research Fellow, Jordi Calvet-Bademunt. To learn more about the draft General-Purpose AI Code of Practice, read the European Commission’s official press release.
X and Grok chatbot spark political controversy in India amid censorship concerns
Elon Musk's AI chatbot, Grok, has ignited political debates in India due to its contentious responses that challenge right-wing narratives. Simultaneously, X (formerly Twitter) is resisting the Indian government's alleged misuse of the Information Technology Act for content blocking, highlighting tensions between tech platforms and governmental censorship efforts.
Prateek Waghre, formerly the Executive Director of the Internet Freedom Foundation, explains: “From my perspective, what is even more concerning is the overt declaration [by the Indian government] of “informal conversations” outside of established procedures and a possible intent to target individuals for prompting Grok, implying that the principle of due process and Indian citizens will pay the price as three powerful actors selfishly pursue their own interests.”
We recently analyzed how six major chatbots responded to 40 controversial prompts and compared the findings with a similar study conducted one year ago. The results show meaningful progress, but some concerns remain. xAI’s Grok provided the most responses, while OpenAI’s ChatGPT ranked lowest, with the most refusals to generate content. DeepSeek performed well, until China came up. The other companies analyzed were Google, Meta, and Anthropic. To learn more about X’s controversial role in India’s political discourse, read Prateek Waghre’s analysis in Tech Policy Press.
Virginia governor vetoes AI legislation targeting high-risk AI systems; Texas overhauls TRAIGA
Virginia Governor Glenn Youngkin recently vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). The bill would have required developers of high-risk AI systems to document system limitations, ensure transparency, and manage risks associated with algorithmic discrimination. Deployers would have had to disclose AI usage to consumers and conduct impact assessments to evaluate potential risks. Additionally, outputs from generative AI systems would have had to be clearly identifiable, with limited exceptions for low-risk or creative applications.
Governor Youngkin stated, “[t]here are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more. HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.”
Had it been signed, HB 2094 would have made Virginia the second state, after Colorado, to enact comprehensive AI regulations targeting high-risk AI systems and algorithmic discrimination. Adam Thierer of the R Street Institute writes that this development “could be turning in a more positive, pro-innovation direction in the states.”
In related news, a Texas Representative unveiled a revised version of his “Texas Responsible AI Governance Act” (TRAIGA), a proposal that some criticized as imposing interventionist policies that would limit AI innovation. The new version prohibits AI systems from intentionally engaging in “political viewpoint discrimination.”
To learn more about Virginia’s AI bill and its implications, read Zach Williams’ brief in Bloomberg Law. To learn about the criticisms of the Virginia and Texas bills, read Adam Thierer’s analysis.
Links to Additional News
Industry:
OpenAI unveils new image generator for ChatGPT (The New York Times).
A federal judge in California denied Elon Musk’s attempt to block OpenAI from becoming a for-profit entity (CNBC).
Meta faces legal challenge by French publishers for copyright infringement over AI training (Bloomberg).
Anthropic wins early round in music publishers’ AI copyright case (Reuters).
Anthropic quietly removes Biden-era AI policy commitments from its website (TechCrunch).
Google reports scale of complaints about AI deepfake terrorism content to Australian regulator (Reuters).
People are using Google’s new AI model to remove watermarks from images (TechCrunch).
Google introduced Gemini 2.5 Pro Experimental as their most intelligent AI model (Google).
Baidu releases reasoning AI model to take on DeepSeek (Bloomberg).
DeepSeek releases V3 model update with significant improvements (Forbes).
Claude can now search the web in feature preview for all paid users in the U.S. (Anthropic).
NewsGuard launches new service to protect LLMs from foreign influence operations (NewsGuard).
Amazon will reportedly process Alexa voice recordings in the cloud, ending local processing (Gadgets360).
Italian newspaper says it has published world’s first AI-generated edition (The Guardian).
Government:
China to crack down on stock market fake news as AI spurs misinformation, says state media (Reuters).
UK lawmaker introduces AI Regulation bill in the House of Lords (PYMNTS).
Spain to impose massive fines for not labelling AI-generated content (Reuters).
New Texas bill could make incredibly popular anime & video games illegal (Dexerto).
California’s AI experts release draft report of recommendations for AI guardrails (CalMatters).
A well-funded Moscow-based global “news” network has infected Western artificial intelligence tools worldwide with Russian propaganda (NewsGuard).
Leaked data exposes a Chinese AI censorship machine (TechCrunch).
Hundreds of actors and Hollywood insiders sign open letter urging government not to loosen copyright laws for AI (CBS News).
The Future of Free Speech in Action
Founder and Executive Director Jacob Mchangama spoke at the Designing Trustworthy AI in a Polarized World conference hosted by the Hoover Institution and the Stanford Graduate School of Business, discussing free speech and political slant in AI.
In March, The Future of Free Speech also published The Future of Free Speech Index 2025, which found that global support for free-speech principles has been declining in recent years, even in democracies. The accompanying report shows that people are much less permissive with AI-generated content than with human-generated content.
We submitted comments to the White House’s Request for Information on implementing the AI Action Plan. We also provided feedback on the third draft of the EU’s General-Purpose AI Code of Practice.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. Isabelle Anzabi is an AI Policy Research Associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.