.exe-pression: April 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
TL;DR
The U.S. House of Representatives passes the “TAKE IT DOWN Act” to curb deepfake pornography, but critics warn of free speech risks.
X has challenged the constitutionality of Minnesota’s political deepfakes ban on First Amendment grounds.
The Indian government has introduced new measures to tackle deepfakes, raising concerns about freedom of expression.
The White House has issued new OMB memos aimed at streamlining federal AI adoption and simplifying prior risk-based requirements.
Russia is flooding the internet with false narratives to manipulate AI chatbot outputs on topics like the war in Ukraine.
OpenAI no longer considers mass manipulation and disinformation a critical risk and, along with Google, is accelerating the release of its models.
Rwanda hosted the first-ever Global AI Summit on Africa to shape the continent’s AI future. The Summit launched a new AI council and unveiled the Africa Declaration on Artificial Intelligence.
The European Commission launched its €200 billion AI Continent Action Plan to position Europe as a global AI leader. The strategy includes building up to five AI gigafactories and simplifying regulations to drive innovation, skills, and infrastructure across the EU.
Main stories
U.S. House passes deepfake bill despite free speech concerns
The U.S. House of Representatives passed the bipartisan “TAKE IT DOWN Act” with a 409-2 vote, following the Senate's unanimous approval in February. The legislation criminalizes the posting of nonconsensual sexually explicit images, including AI-generated deepfakes, and mandates that social media platforms remove such content within 48 hours upon a victim's request. First Lady Melania Trump supported the bill, emphasizing its importance in protecting youth from harmful online content. The act now awaits President Donald Trump's signature to become law.
However, several digital rights organizations have raised concerns about the potential implications of the “TAKE IT DOWN Act.” Ashkhen Kazaryan, Senior Legal Fellow at The Future of Free Speech (FoFS), points out that the act “responds to real harms, but in the hands of a government increasingly willing to regulate speech, its broad provisions provide a powerful new tool for censoring lawful expression, monitoring private communications, and undermining due process.”
Mike Masnick, from Techdirt, criticizes the act as being unconstitutionally vague and prone to abuse, suggesting it could be weaponized for political censorship. In a recent speech to Congress, President Trump suggested that he would use the bill to protect himself as well, “because nobody gets treated worse than [he does] online, nobody.”
The Electronic Frontier Foundation (EFF) warns that the act could be exploited to suppress lawful speech since it lacks adequate safeguards against frivolous or bad-faith takedown requests. They highlight that the 48-hour removal requirement may pressure platforms to over-censor content to avoid penalties. Public Knowledge echoes these concerns, noting that the act's provisions could undermine encryption and privacy. They argue that the enforcement authority granted to the Federal Trade Commission is problematic, especially given concerns about the agency's independence under the current administration.
To learn more about the act, read this CBS article by Caitlin Yilek. For a critical analysis, we invite you to read the articles from Mike Masnick at Techdirt, Jason Kelley at EFF, and Shiva Stella at Public Knowledge.
X wants to overturn a political deepfakes ban in Minnesota
X is challenging the constitutionality of Minnesota’s ban on using deepfakes to influence elections and harm political candidates, arguing that the law violates the First Amendment. X also contends that Section 230 of the Communications Decency Act, which shields platforms from liability for third-party content, preempts the state’s law.
Minnesota’s law criminalizes distributing deepfake videos, images, or audio if done knowingly or with reckless disregard for authenticity within 90 days of a nominating convention or after early voting begins. The material must be intended to harm a candidate or influence an election and be AI-generated or similarly produced to appear convincingly real.
X warns that although banning "deepfakes" may sound reasonable, the law would actually criminalize innocuous, election-related speech, including humor, and expose social media platforms to criminal liability for failing to censor such content. Rather than defending democracy, X argues, the law would erode it. X also notes that the courts recently blocked a similar deepfake law in California (read Jacob Mchangama’s piece at MSNBC on California’s law).
Content creator Christopher Kohls and GOP state Rep. Mary Franson, known for sharing AI-generated political parodies, had already challenged the Minnesota law before X filed its suit.
The Minnesota attorney general’s office contends that deepfakes pose a serious and increasing threat to free elections and democratic institutions, that the law is a valid and constitutional way to address the issue, and that it includes key safeguards to protect satire and parody.
Despite X’s mixed record on defending free speech, laws like Minnesota’s raise concerns. Senior Research Fellow Jordi Calvet-Bademunt recently explained in Tech Policy Press that panic over AI and elections has fueled misguided legislation. Authorities should avoid outright bans on political deepfakes, and AI models should not be compelled to align with specific political viewpoints.
To learn more about the Minnesota law, read this Associated Press article by Steve Karnowski.
The Government of India is taking measures to tackle deepfakes
The Indian government has outlined new steps to crack down on AI-generated misinformation and deepfakes. The Minister of State for IT told Parliament that online platforms have been reminded of their due diligence obligations under the IT Rules, 2021, and advised to counter unlawful content, including malicious "synthetic media" and "deepfakes," by promptly removing harmful material. Unlawful content includes child abuse material, fake news, and posts that threaten national integrity or public order.
The government has stated that these measures aim to “ensure a safe, trusted, and accountable cyberspace.” However, given that India is categorized as “partly free” in Freedom House’s Freedom on the Net Index, and individuals are frequently investigated or arrested for their online activity, there are reasons to fear that these measures could be abused.
To learn more about India’s new measures, read the official press release.
Trump Administration’s OMB memos aim to promote federal AI adoption
The White House has released a series of new AI policies that mark a sharp departure from the prior administration’s more cautious approach. Central to this shift is a pair of memoranda from the Office of Management and Budget (OMB), M-25-21 and M-25-22, which aim to “facilitate responsible AI adoption to improve public services.” The memos focus on removing barriers to AI innovation and on tracking AI adoption, introduce a “high-impact” AI category to track use cases that could affect rights and safety, and seek to promote more effective and efficient AI acquisition.
As Ellen P. Goodman analyzes at Tech Policy Press, in contrast to the previous administration’s risk-aware and equity-focused model, the Trump memos eliminate references to “responsibility,” “equity,” “bias,” and environmental concerns. There is also no requirement for agencies to use the NIST AI Risk Management Framework (RMF); instead, agencies are encouraged to define their own performance standards. This deregulatory move has raised concerns about diminished federal coordination and oversight, particularly as it sidelines NIST, a key institution in standard-setting and soft power diplomacy. The memos emphasize that the government “will no longer impose unnecessary bureaucratic restrictions” on the deployment of AI in the executive branch.
The guidance does hint at future plans for managing generative AI, promising forthcoming “playbooks,” but otherwise leaves the area largely unaddressed. Overall, the memos reflect a decisive pivot toward performance-first AI governance, prioritizing speed and innovation over centralized safeguards. While the Trump Administration positions this as a boost to American competitiveness, the reduced emphasis on risk and civil liberties has sparked debate over the long-term implications for public accountability and constitutional protections, especially as generative AI becomes more pervasive in federal operations.
To learn more about the newly released White House memos, read Ellen P. Goodman’s analysis at Tech Policy Press.
Russia exploits AI chatbots with mass-produced disinformation
Russia is systematically manipulating AI chatbots by flooding the internet with false information, aiming to influence their outputs on sensitive topics like the war in Ukraine. Researchers found that a significant share of responses from leading chatbots repeated Russian propaganda, such as the false claim that the U.S. was producing bioweapons in Ukraine. The operation relies heavily on networks like Pravda, which generate thousands of fabricated articles daily aimed not at human readers but at the AI crawlers that gather content for training and search-augmented systems.
The disinformation is further laundered through edits to Wikipedia pages and social media posts, increasing its visibility to AI models that prioritize those sources. Russia's efforts are increasingly automated and cost-effective compared to traditional troll farms, with similar tactics emerging from China. Investigations revealed that false narratives promoted by these networks are appearing in chatbot responses across multiple platforms, as companies push AI services to market with limited vetting of their information sources.
To learn more about Russia’s disinformation campaign, read Joseph Menn’s article in the Washington Post. Valentin Châtelet, from the Atlantic Council, shared similar findings in this piece.
OpenAI no longer sees mass manipulation and disinformation as a critical risk
OpenAI announced that it will no longer test its AI models for their potential to persuade or manipulate people before release, a major shift from its previous risk standards. Instead, the company will manage these risks through stricter terms of service, banning use in political campaigns and lobbying, and monitoring model usage after deployment. OpenAI also updated its "Preparedness Framework" to allow the release of models classified as "high risk," or even "critical risk" if competitors have released similar models, moving away from its earlier policy of withholding models deemed riskier than "medium."
This comes at a time when reports indicate that Google is prioritizing nimbleness over transparency. Its latest model, Gemini 2.5 Pro, was launched before the corresponding safety report was released. Although Google later published a report, it provided limited details.
To learn more about OpenAI’s changes, read Fortune’s article. More details on Google’s practices are available in TechCrunch’s first article and the update.
Rwanda hosts first Global AI Summit on Africa
Rwanda recently hosted the inaugural Global AI Summit on Africa, the first event of its kind dedicated solely to the continent’s AI future. Held in Kigali on April 3–4, 2025, the Summit welcomed over 1,000 attendees, including global leaders, African heads of state, tech giants, investors, and policy shapers. Themed “AI and Africa’s Demographic Dividend: Reimagining Economic Opportunities for Africa’s Workforce,” the summit marked a major milestone in aligning AI development with the African Union’s Agenda 2063.
Key among its outcomes was the launch of the Artificial Intelligence African Council, bringing together delegates from 40 countries to coordinate and drive inclusive AI innovation across the continent. The highlight of the event was the unveiling of the Africa Declaration on Artificial Intelligence, a vision for AI that centers ethics, inclusivity, and sustainable development. The declaration commits to fostering responsible AI ecosystems tailored to Africa, while positioning the continent as a global thought leader in AI governance. Furthermore, African countries and international partners launched a $60 billion AI fund at the event to invest in critical AI enablers, including compute infrastructure, talent development, and energy.
To learn more about the Global AI Summit on Africa, read the summary by the UNDP.
European Commission unveils AI Continent Action Plan
The European Commission has unveiled the AI Continent Action Plan, aimed at establishing Europe as a global leader in artificial intelligence. Central to the plan is a €200 billion investment package, including €20 billion dedicated to constructing up to five AI gigafactories across the EU. This infrastructure upgrade aims to supercharge Europe’s AI capacity and ensure it competes on an equal footing with other global tech powers.
The AI Continent Action Plan is built on five strategic pillars: 1) Building a large-scale AI data and computing infrastructure; 2) Increasing access to large and high-quality data; 3) Developing algorithms and fostering AI adoption in strategic EU sectors; 4) Strengthening AI skills and talents; and 5) Regulatory simplification. Through this framework, the Commission aims to cultivate an environment for responsible, competitive, and inclusive AI development rooted in European values.
To ensure transparency and broad engagement, the Commission has also launched two public consultations, open until 4 June 2025. The first focuses on the Cloud and AI Development Act, while the second, on the Apply AI Strategy, seeks input on policy priorities, barriers to adoption, and further steps to ensure the full adoption of AI in strategic sectors. Stakeholders are encouraged to contribute to help shape the future of AI across the continent.
To learn more about the AI Continent Action Plan, read this brief by The Brussels Times.
Links to Additional News
Industry:
Activist Robby Starbuck Sues Meta Over AI Answers About Him (The Wall Street Journal).
Meta’s ‘Digital Companions’ will talk sex with users—even children (The Wall Street Journal).
AI versus free speech: Lawsuit could set landmark ruling following teen’s suicide (Fox 35 Orlando).
Meta’s ChatGPT competitor shows how your friends use AI (The Verge).
ChatGPT adds Washington Post content to growing list of OpenAI media deals (CNBC).
OpenAI would buy Google’s Chrome Browser, ChatGPT chief says (Bloomberg).
‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw (Wired).
Meta announces plans to train AI using public content in the EU (Meta).
Instagram tries using AI to determine if teens are pretending to be adults (Local10).
WhatsApp defends 'optional' AI tool that cannot be turned off (BBC).
OpenAI debuts its GPT-4.1 flagship AI model (The Verge).
Baidu launches new AI model amid mounting competition (Reuters).
Meta and Booz Allen partner on ‘Space Llama’ AI program with Nvidia and HPE (CNBC).
The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation (Meta).
‘Meta has stolen books’: authors to protest in London against AI trained using ‘shadow library’ (The Guardian).
Adobe releases ‘created without generative AI’ tag to label human-generated art (Fast Company).
OpenAI and Google Reject UK Government’s AI Copyright Proposal (TechRepublic).
Google could use AI to extend search monopoly, DOJ says as trial begins (Reuters).
Our investment in AI-powered solutions for the electric grid (Google).
Nvidia to mass produce AI supercomputers in Texas as part of $500 billion U.S. push (CNBC).
Government:
President Trump issues Executive Order Advancing Artificial Intelligence Education for American Youth (White House).
Trump Administration pressures Europe to ditch AI rulebook (Bloomberg).
American Public Submits Over 10,000 Comments on White House’s AI Action Plan (White House).
Former school athletic director gets 4 months in jail in racist AI deepfake case (AP).
UAE set to use AI to write laws in world first (Financial Times).
Generative AI providers see first steps for EU code of practice on content labels (MLex).
How the US government wants to rewrite EU’s code of practice for AI models (MLex).
Consultation: Commission Guidelines to Clarify the Scope of the General-purpose AI Rules in the AI Act (European Commission).
The Commission suggests that integrating DeepSeek into online platforms in the EU might result in breaching the Digital Services Act (European Parliament).
South Africa advocates for linguistic equity in AI at G20 (Zawya).
El Salvador works with Nvidia to develop sovereign AI infrastructure (CoinTelegraph).
Creating and sharing deceptive AI-generated media in the furtherance of a crime is now a crime in New Jersey (AP News).
Colorado lawmakers move to ban sexually exploitative images, video created with artificial intelligence (The Colorado Sun).
House Select Committee Publishes Report on DeepSeek, as Commerce Imposes New AI Chip Export Restrictions (JD Supra).
Trump’s AI infrastructure plans could face delays due to Texas Republicans (The Guardian).
Generative AI is learning to spy for the US military (MIT Technology Review).
California Supreme Court demands State Bar answer questions on AI exam controversy (Los Angeles Times).
James Bulger's mum seeks AI law to curb clips of murder victims (BBC).
UK creating ‘murder prediction’ tool to identify people most likely to kill (The Guardian).
Beijing Intellectual Property Court: Artificial Intelligence Models Can Be Protected with the Anti-Unfair Competition Law, Not the Copyright Law (The National Law Review).
The Future of Free Speech in Action
FoFS Executive Director Jacob Mchangama will speak at the Copenhagen Democracy Summit 2025 on May 13-14. Jacob will discuss the impact of AI legislation in Europe and the United States on free speech, as well as the role of generative AI systems in today’s public sphere.
Senior Research Fellow Jordi Calvet-Bademunt will be a featured speaker at AI For Prosperity, an event organized by the U.S. Department of State, DCN Global, and World Learning in Buenos Aires on May 18-20. Jordi will explore the risks and opportunities that AI regulation presents for free expression and creativity in today's evolving landscape.
*A big thank you to our intern, Joshua Rosen, for all his help this semester on the AI project and this newsletter.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.