.exe-pression: February 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
TL;DR
The Paris AI Action Summit’s Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet drew broad international support, with the notable exceptions of the U.S. and the UK. U.S. Vice President JD Vance’s speech warned against excessive regulation of AI and claimed that the administration is committed to “free speech.”
The EU AI Act’s provisions on “unacceptable risk” in AI systems and on AI literacy requirements have taken effect, with the Commission issuing non-binding guidelines on how to define “AI system” and on the “unacceptable risk” prohibitions. Separately, the European Commission has withdrawn the AI Liability Directive, citing a lack of foreseeable agreement.
The U.S. Senate passed the TAKE IT DOWN Act, a bill that would criminalize non-consensual intimate imagery (NCII), including deepfakes, and mandate that social media platforms remove such content within 48 hours. Free speech advocates have raised alarms about the removal obligation.
The UK will introduce new AI-related sexual abuse offenses, criminalizing AI-generated child sexual abuse material (CSAM) and AI “paedophile manuals,” alongside measures targeting CSAM websites and granting an enforcement authority the power to compel individuals to unlock their devices based on “reasonable suspicion.”
South Korea removed DeepSeek, a Chinese AI chatbot, from app stores in the country due to data protection concerns.
Main Stories

The Paris AI Action Summit Redefines Global AI Agenda
France hosted the AI Action Summit, focusing on multilateral cooperation in AI innovation, investment, cultural impact, environmental sustainability, and inclusivity. It was the third global AI summit, following those held in the UK in 2023 and South Korea in 2024. The event brought together over 1,000 participants and several dozen heads of state.
A key outcome was the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet,” signed by more than 60 countries and regional blocs, notably excluding the U.S. and the UK. At the summit, U.S. Vice President JD Vance cautioned attendees against “excessive regulation” of AI. He reaffirmed that the “Trump Administration will ensure that AI systems developed in America are free from ideological bias and never restrict [U.S.] citizens’ right to free speech.”
Jovan Kurbalija of Diplo explains that the Paris Action Summit shifted the global conversation on AI, “moving away from speculative long-term risks and toward immediate, tangible issues like innovation, jobs, and public good.” Unlike previous AI forums, this summit steered away from a safety-focused agenda and toward short-term, existing risks. For a deeper analysis of the summit’s impact, read Diplo’s brief.
At a separate event, the Munich Security Conference, Vice President Vance also claimed that Europe has gone too far in restricting speech. The Future of Free Speech’s Executive Director, Jacob Mchangama, recently argued that while Europe has a free speech problem, so does the Trump administration.
European Commission Withdraws the AI Liability Directive
The European Commission abandoned the EU AI Liability Directive proposal, arguing that there was “[n]o foreseeable agreement.” The Commission will now assess “whether another proposal should be tabled or another type of approach should be chosen.” The AI Liability Directive was initially proposed to “lay down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.”
In late January, an industry coalition called for its withdrawal “to avoid further legal uncertainty and support Europe’s competitiveness.” The Commission’s decision follows criticism of the EU’s pro-regulatory approach to AI and aligns with the global shift toward pro-innovation policies, as seen at the Paris AI Action Summit. However, the European Parliament’s Internal Market and Consumer Protection Committee has voted to keep working on the proposal.
To learn more about the Commission’s decision to withdraw the AI Liability Directive, read IAPP’s brief.
EU AI Act’s Rules on Prohibited AI Practices Have Taken Effect
The EU AI Act’s provisions on “unacceptable risk” in AI systems became applicable on February 2. These provisions ban AI systems and practices “deemed unacceptable due to their potential risks to European values and fundamental rights.” This includes, for instance, the use of AI systems that deploy subliminal or manipulative techniques, predict an individual’s risk of committing a crime, or infer emotions in the workplace or in education. The European Commission also released non-binding guidelines to help stakeholders comply with these prohibitions.
The provisions on AI literacy requirements also took effect. They require providers and deployers of AI systems to ensure, “to their best extent,” that staff dealing with AI systems for business purposes have sufficient skills to deploy those systems and are aware of the risks and opportunities they present. The AI Office is expected to issue formal guidelines soon but has already published a Living Repository to Foster Learning and Exchange on AI Literacy.
Additionally, the Commission issued guidelines on the definition of AI systems, emphasizing that these guidelines “are designed to evolve over time and will be updated as necessary.” Like the compliance guidelines, they are non-binding and help developers determine whether a software system qualifies as an AI system under the AI Act.
To learn more about the AI Act’s requirements coming into force, read Mayer Brown’s brief.
The TAKE IT DOWN Act, an Anti-Deepfake Bill, Passes the U.S. Senate
The U.S. Senate passed the TAKE IT DOWN Act (S. 146). The bill would make it a criminal offense to publish NCII, including AI-generated NCII (“deepfake revenge pornography”), and would mandate that social media platforms and similar websites establish procedures to remove such content within 48 hours of receiving a victim’s notice. Advocates for abuse victims praised the move, but digital rights groups warn that the mandated 48-hour notice-and-takedown system is overbroad and threatens lawful speech. They argue the bill “is likely unconstitutional and will undoubtedly have a censorious impact on users’ free expression,” since attempts to comply could result in the removal of satire, journalism, and other protected content. The bill now awaits action in the House.
To learn more about the bill and its implications, read Kaylee Williams’ analysis in Tech Policy Press.
UK Targets AI-Generated Child Sex Abuse Images with New Laws
The United Kingdom will be the first country to create “new AI sexual abuse offences to protect children from predators generating AI images,” according to the Home Office. The new laws would make it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse material (CSAM), as well as to possess AI “paedophile manuals” that teach people how to use AI to sexually abuse children.
The Home Office will also introduce two related offenses: criminalizing those who run websites designed for CSAM or grooming, and allowing Border Force to compel an individual to unlock their phone based on reasonable suspicion that they pose a sexual risk to children. These measures will be introduced as part of the Crime and Policing Bill before Parliament in the spring. While the rules are undoubtedly well-intentioned and address a very serious issue, they must be scrutinized to ensure that they respect the fundamental right to freedom of expression and are not excessively broad or vague.
To learn more about the UK’s new AI sexual abuse offenses, read the BBC’s brief.
South Korea Removes DeepSeek from the Country’s App Stores
DeepSeek, a Chinese AI chatbot that quickly gained global popularity, has been removed from South Korean app stores by the country’s Personal Information Protection Commission (PIPC) over data protection concerns. The commission cited a lack of transparency around third-party data transfers and excessive personal information collection, leading to a temporary ban on new downloads until the app meets stricter privacy standards. Italy also blocked DeepSeek for privacy reasons in January, and authorities in Australia, Taiwan, and several U.S. states have banned DeepSeek from government devices over data privacy and censorship concerns.
To learn more about the restrictions and security concerns, read the brief by Charles Mok of the Global Digital Policy Incubator.
Links to Additional News
Industry:
Apple teams up with Alibaba to bring AI features for iPhones in China (Reuters).
Google and Meta are leading the charge against a “code of practice” governing tools like Gemini, ChatGPT, and Llama (Politico).
Google erases promise not to use AI technology for weapons or surveillance (CNN).
Grok 3, xAI’s model deployed on X, appears to have briefly censored unflattering mentions of Trump and Musk (TechCrunch).
Meta says it may stop the development of frontier AI systems it deems too risky (TechCrunch).
OpenAI tries to ‘uncensor’ ChatGPT (TechCrunch).
OpenAI is facing a copyright lawsuit in India from prominent players in the music industry (The National Law Review).
Scale AI and the U.S. AI Safety Institute are partnering on third-party evaluation, developing AI testing criteria and assessing AI models (Scale AI).
Guardian Media Group announces a strategic partnership with OpenAI that will provide access to the Guardian’s editorial content through ChatGPT (The Guardian).
Thomson Reuters wins the first major AI copyright case in the U.S. (Wired).
Government:
The White House seeks public input on the development of an Artificial Intelligence Action Plan, with comments due March 15 (The Federal Register).
The U.S. AI Safety Institute could face big cuts, with hundreds of staffers potentially laid off (TechCrunch).
In the U.S., two recent federal court rulings on AI copyright infringement and fair use trended in favor of copyright holders and against AI companies (Transparency Coalition).
The European Parliament Research Service published a brief on “Algorithmic discrimination under the AI Act and the GDPR” (European Parliament).
EU launches InvestAI initiative to mobilize €200 billion for AI investment (European Commission).
France announced a €109 billion investment in AI projects and data centers over the next several years (Le Monde).
UK’s AI Safety Institute becomes the AI Security Institute, focusing on AI cybersecurity risks and not “bias or freedom of speech” (UK Government).
UAE plans to launch new DeepSeek-inspired AI models, according to a senior official (AFP).
Canada and Japan sign the Council of Europe’s first-ever global treaty on AI (Council of Europe).
The Future of Free Speech in Action
The Future of Free Speech hosted a RightsCon 2025 panel on “T&S in Gen AI: Agreeing on Principles for Freedom and Safety,” featuring panelists Sarah Shirazyan (Meta, Stanford), David Evan Harris (UC Berkeley), and Sayash Kapoor (Princeton), with Jordi Calvet-Bademunt (The Future of Free Speech) moderating.
Senior Research Fellow Jordi Calvet-Bademunt also spoke at the 2025 Tennessee Campus Civic Summit at the University of Tennessee at Chattanooga, discussing the implications of AI for democracy.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. Isabelle Anzabi is a Research Associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.