.exe-pression: November - December 2025
A newsletter on Freedom of Expression in the Age of AI
TL;DR
President Trump Signs “One-Rule” AI Order Targeting State Laws: The Trump administration has issued an executive order directing federal agencies to challenge state AI regulations and advance a “minimally burdensome” national standard. The order empowers the Attorney General, FTC, and other agencies to move against state laws that require AI models to alter “truthful outputs” or impose disclosure mandates seen as violating the First Amendment, potentially preempting state rules on political deepfakes, algorithmic transparency, and other speech-related regulation.
European Parliament Pushes for EU-Wide Age Limits on AI and Social Media: The European Parliament backed a non-binding call for a uniform under-16 restriction on social media and AI chatbots, raising concerns about restricting minors’ access to information and online expression.
EU Proposes Digital Omnibus Regulation: The European Commission unveiled a draft Digital Omnibus Regulation updating transparency, reporting, and content moderation rules for online platforms. The law aims to reduce compliance burdens and harmonize requirements across the EU, but could still incentivize over-moderation and influence what users feel safe posting.
South Korea Releases Draft Enforcement Decree for AI Basic Act: The Ministry of Science and ICT unveiled rules for high-performance AI, generative content labeling, and industry support, opening a public consultation ahead of the Act’s January 2026 implementation.
Turkey Proposes AI Legal Framework With Liability and Takedown Rules: A draft law would define “AI system,” impose developer and user liability, mandate labeling and rapid takedowns of harmful content, and grant authorities broad powers to block AI outputs — raising risks of over-censorship and chilling lawful expression.
Brazil Proposes National AI Governance Framework: President Lula has sent a bill to Congress that would create a coordinated AI governance system led by the data protection authority, with binding transparency standards and sectoral oversight — raising concerns about over-compliance that could indirectly chill journalistic, artistic, and political uses of AI.
Rights Groups Demand Limits on Facial‑Recognition Use: Civil‑liberties organizations call on DHS to suspend ICE’s Mobile Fortify app and highlight NYPD’s biased use of facial recognition, warning that current practices risk misidentification, target marginalized communities, and could chill lawful expression.
TikTok Adds Controls to Limit AI Content Amid Surge of Unlabeled Deepfakes: TikTok has introduced new settings to reduce or opt out of AI-generated videos after research found billions of views for unlabeled synthetic content, including misleading political and sexualized posts.
Major stories
» President Trump Signed a “One Rule” AI Executive Order Challenging States’ AI Laws
President Donald Trump has signed an executive order directing federal agencies to challenge state AI regulations and push toward a minimally burdensome national standard, after lawmakers dropped efforts to embed preemption language in a must-pass defense bill.
Details:
President Trump signed Ensuring a National Policy Framework for Artificial Intelligence, an executive order intended to prevent a patchwork of 50 different state AI regulatory regimes from impeding AI innovation.
The order directs the Attorney General to create an AI Litigation Task Force within 30 days to challenge state laws viewed as conflicting with federal policy, including under constitutional and preemption arguments.
Within 90 days, the Secretary of Commerce must evaluate state AI laws and identify those that “require AI models to alter their truthful outputs” or compel disclosures that would violate the First Amendment or other constitutional rights.
The FTC must issue a policy statement clarifying when state laws that require changes to truthful AI outputs are preempted by the FTC Act’s bar on deceptive practices.
The FCC is instructed to consider a federal reporting and disclosure standard for AI models that would supersede conflicting state requirements.
The Secretary of Commerce must also outline when states may remain eligible for Broadband Equity, Access, and Deployment (BEAD) funding, withholding certain funds from states with “onerous AI laws,” and agencies must review grant programs for potential similar conditions.
Background:
Congressional efforts to secure preemption via the must-pass defense bill were recently abandoned. House Majority Leader Steve Scalise confirmed the language would not be included: “If you can add [preemption language], that’s great, but that wasn’t the best place for this to fit.”
Free Speech Implications:
Several states have passed AI regulations at odds with free speech principles, such as California’s and Minnesota’s laws banning political deepfakes.
However, the emphasis on “truthful outputs,” rather than on core First Amendment grounds, raises questions about whether this effort is aimed more at advancing specific viewpoints favored by the government than at providing a principled protection of free speech.
» European Parliament Pushes for EU-Wide Minimum Age to Access AI Chatbots & Social Media
The European Parliament approved a non-binding resolution recommending that the Digital Services Act (DSA) framework be supplemented with a uniform EU-wide minimum age of 16 for access to social-media platforms, video-sharing sites, and AI chatbots/companions.
Details:
The proposal would allow individuals aged 13–16 to access those services only with parental consent, and would entirely bar under-13s from access.
Parliament also calls for restrictions on addictive design features for minors, measures to address AI tools capable of generating fake or inappropriate content, and the blocking of non-compliant websites.
The resolution has no legal force and expresses Parliament’s position on the matter.
Free Speech Implications: The resolution echoes the Australian government’s ban on under-16s accessing social media and could negatively affect minors’ access to information and expression online.
» EU Unveils “Digital Omnibus” Proposal to Streamline AI and Digital Rules
The European Commission released its Digital Omnibus proposal, a wide-ranging update to EU digital laws meant to reduce compliance burdens and align deadlines across data, AI, and cybersecurity frameworks.
Details:
The package is part of a broader simplification agenda aiming to make digital rules “clearer and more coherent” for businesses.
For AI, the Omnibus would delay several EU AI Act deadlines, including postponing requirements for high-risk AI systems until harmonized standards are in place and pushing the transparency obligation for AI-generated content to February 2027.
It would also streamline conformity assessments, expand flexibilities for SMEs and mid-caps, and centralize elements of AI oversight within the EU AI Office to reduce duplication across member states.
The Commission argues the changes will help companies comply more effectively, but rights groups warn the package could dilute safeguards embedded in existing data-protection and AI-governance rules.
Free Speech Implications:
Clearer requirements on transparency and reporting could help platforms justify decisions to users, supporting accountability and potentially reducing inconsistent content removal.
While bans on political deepfakes raise significant free speech concerns, the discussion surrounding disclosure—its implications for free speech, effectiveness, and feasibility—remains ongoing.
» South Korea Releases Draft Enforcement Decree for AI Basic Act
South Korea’s Ministry of Science and ICT released the draft enforcement decree for the AI Basic Act, its first comprehensive AI legislation, and opened a public consultation period. The decree clarifies standards and obligations under the Act.
Key Measures:
Establishes standards for support projects aimed at fostering the AI industry.
Designates institutions responsible for implementing national AI policy.
Defines “high-performance AI model” by a computational-power threshold and subjects such models to additional safety regulations and oversight.
Introduces disclosure and labeling requirements for “deepfakes” and other generative content to ensure users are properly informed.
Confirms exemptions for AI used exclusively for national defense or security purposes.
The decree reflects input from the National AI Strategy Committee, industry stakeholders, and experts, and is expected to be finalized before the AI Basic Act takes effect in January 2026.
Free Speech Implications: Disclosure and labeling requirements for generative content may help users understand AI outputs, but compliance obligations could encourage cautious or conservative deployment, potentially affecting expressive, artistic, or political uses of AI.
» Turkey Proposes Broad AI Legal Framework With Liability, Takedown, and Labeling Rules
A draft law submitted to Turkey’s Grand National Assembly would introduce a legal definition of “AI system,” clarify liability for users and developers, and impose new obligations and enforcement mechanisms across multiple statutes, marking a significant expansion of state control over AI‑generated content and technology.
Details:
The bill would amend Turkey’s Internet Law (Law No. 5651), Turkish Penal Code No. 5237, and other laws to define “AI system” as software or models that autonomously or semi‑autonomously process data to generate outputs.
It creates liability for individuals who direct an AI to commit conduct that constitutes a crime, treating them as perpetrators, and increases penalties for developers whose design or training of AI enables such acts.
Content generated by AI that violates personal rights, endangers public security, or involves deepfakes must be blocked or removed within six hours, with both content providers and AI developers jointly liable for compliance.
AI‑generated material must carry a visible label with administrative fines for failure to disclose and potential access blocking orders for systematic non‑compliance.
The draft would empower the Information and Communication Technologies Authority (BTK) to impose fines and issue urgent blocking orders for AI content that threatens public order or election security.
Free Speech Implications:
The strict takedown timeline and joint liability expose platforms and developers to risk, likely encouraging over‑removal of user content to avoid penalties — including lawful political or cultural speech.
Mandatory labelling and rapid enforcement may stifle creative, journalistic, and satirical uses of AI, particularly where expressive nuance or context matters.
Broad emergency blocking power over “threats to public order or election security” — undefined in the bill — could enable arbitrary enforcement against dissenting voices, chilling public debate and minority expression.
In July, a Turkish court partially blocked access to xAI’s chatbot Grok, which the court found responsible for generating content “insulting” to both President Erdogan and modern Turkey’s founder, Kemal Ataturk, as well as religious values.
» Brazil Sends Sweeping AI Governance Bill to Congress
President Luiz Inácio Lula da Silva has submitted a comprehensive AI governance bill to Brazil’s Congress. Bill No. 6237/2025 would create the National System for the Promotion and Regulation of Artificial Intelligence, bringing together the Brazilian Council for Artificial Intelligence (CBIA), the National Data Protection Authority (ANPD), sectoral regulators, and advisory bodies involving civil society and experts.
Details:
The bill establishes a coordination framework for AI policy and oversight across government, rather than setting detailed content or speech rules.
CBIA would define national AI policy and strategic priorities, while ANPD would act as the central coordinator, issuing binding transparency standards and general rules.
Sectoral regulators would oversee AI uses within their domains under these cross-cutting norms.
While the proposal references risk-based oversight, specific high-risk classifications and obligations remain part of Brazil’s broader AI legislative debate.
Free Speech Implications:
Vague transparency requirements and sectoral oversight of high-risk systems may encourage over-compliance, affecting journalistic, artistic, and political uses of AI.
To learn more about Brazil’s AI regulations and freedom of expression, read Carlos Affonso Souza’s chapter from our report — That Violates My Policies: AI Laws, Chatbots, and the Future of Expression.
» Civil Liberties Groups Demand End to Facial-Recognition Use by ICE & NYPD
Civil-liberties organizations are calling for an immediate halt to two separate U.S. law-enforcement agencies’ use of mobile facial-recognition technology (FRT), citing documented abuses that threaten privacy, protest rights, and free expression.
ICE’s Mobile Fortify App — National Biometric Surveillance on the Street
Civil-liberties organizations, including the Electronic Frontier Foundation (EFF), Asian Americans Advancing Justice and the Project on Government Oversight, are urging the Department of Homeland Security (DHS) to immediately stop Immigration and Customs Enforcement (ICE)’s use of “Mobile Fortify,” a handheld FRT that allows agents to take a photo during an encounter and run it across DHS databases, pulling immigration files and other identity data.
The coalition warns the tool enables warrantless, suspicionless biometric scanning of immigrants, U.S. citizens, and bystanders, with no public oversight.
They also cite DHS’s history of biometric misidentification as evidence that the tool risks wrongful stops or detentions, with data potentially retained long-term and shared broadly.
NYPD’s Documented Use of FRT Against Protesters and Communities of Color
Newly released New York Police Department (NYPD) records obtained by Amnesty International and the Surveillance Technology Oversight Project show extensive and discriminatory use of FRT, including repeated targeting of protesters, people of color, and individuals engaged in constitutionally protected activity.
The records show that NYPD analysts routinely used FRT in ways that risked racially biased identification, including flagging people based on facial features and hairstyles stereotypically associated with communities of color.
Images used for matching often came from social media, protest footage, or low-quality surveillance feeds, increasing the likelihood of misidentification.
Free Speech Implications:
EFF warns that pervasive, on-the-spot biometric scanning could chill participation in protests, public gatherings, and other expressive activities, especially in communities already subject to heightened immigration enforcement.
Because both ICE and NYPD deploy facial recognition in public and street-level contexts — often without consent, transparency, or oversight — the technologies pose a serious threat to anonymity, protest, and free expression. For marginalized communities, the racial and ethnic biases documented in facial-recognition tools add a further layer of risk.
» TikTok Addresses Surge of AI-Generated Content With New User Controls
TikTok rolled out new settings that let users reduce or opt out of AI-generated content in their feeds, as new research shows the platform is being flooded with unlabeled synthetic media, including anti-immigrant narratives and sexualized content, that has collectively reached billions of views.
Details:
TikTok announced that users can now “tune down” or opt out of AI-generated videos in their For You Page and will see clearer “AI-made” labels on realistic synthetic media.
The update comes amid mounting pressure after AI Forensics, a Paris-based non-profit, found hundreds of AI-driven accounts pushing nearly 43,000 AI-generated posts per month, much of it unlabeled.
The report identified content with anti-immigrant messaging, sexualized depictions, and misleading “news-style” posts — collectively garnering billions of views.
TikTok has labeling policies but, in practice, enforcement remains inconsistent and many AI-generated videos remain undetected.
Free Speech Implications: New user controls will give individuals more autonomy over their feeds, but they also raise concerns that platforms could algorithmically shadow-ban AI-generated or AI-assisted artistic or political content.
The Future of Free Speech in Action
The Future of Free Speech convened an international roundtable in Sweden, in collaboration with Örebro University, to examine how artificial intelligence is reshaping freedom of expression. Building on our flagship report That Violates My Policies: AI Laws, Chatbots, and the Future of Expression, the event brought together leading academics, regulators, companies, and civil-society experts to explore how generative AI is transforming creativity, influencing democratic dialogue, and redefining the boundaries of lawful speech.
Senior Research Fellow Jordi Calvet-Bademunt delivered a session on AI and freedom of expression to employees of Orange in celebration of Human Rights Day. Orange is one of France’s largest telecommunications operators, and the session explored the implications of recent public policies globally and corporate practices in the AI space.
Jordi will also discuss the EU AI Act and the Digital Services Act, and their implications for freedom of expression, at the conference GenAI & Creative Practices: Past, Present, and Future at the University of Amsterdam on December 18.
Executive Director Jacob Mchangama participated in a Lawfare podcast, Scaling Laws: AI Chatbots and the Future of Free Expression with Jacob Mchangama and Jacob Shapiro, discussing the findings of our report and AI and free speech more broadly.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Hirad Mirami is a research assistant at The Future of Free Speech and a student at the University of Chicago studying economics and history.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.









