.exe-pression: March - April 2026
A Newsletter on Freedom of Expression in The Age of AI
TL;DR
DOJ Joins Federal Challenge to Colorado’s AI Discrimination Law: The U.S. Department of Justice intervened in a federal lawsuit filed by Elon Musk’s xAI against Colorado’s AI discrimination law, marking the federal government’s first courtroom challenge to a state AI law. xAI argues the legislation compels developers to embed ideological judgments into AI model outputs, while the DOJ contends the law violates the Equal Protection Clause.
White House Releases Legislative AI Blueprint: The Trump administration issued a national AI policy framework urging Congress to centralize AI oversight and preempt state regulation, framing preemption as a safeguard against government censorship. Whether that framing actually protects against censorship or merely relocates regulatory power remains an open question.
Washington State Enacts AI Disclosure and Chatbot Safety Laws: Washington Governor Ferguson signed three AI bills requiring watermarks on AI-generated media, disclosure and safety obligations for companion chatbots, and a new private right of action for unauthorized deepfake uses of a person’s voice or likeness — with critics warning that litigation-driven enforcement may create compliance uncertainty.
Anthropic Sues Trump Administration Over Pentagon Blacklisting: Anthropic is challenging its designation as a supply-chain risk, alleging the move was direct retaliation over its refusal to allow the Pentagon to use Claude for mass domestic surveillance or autonomous weapons targeting.
Court Refuses to Block California’s AI Training Data Transparency Law: A federal court declined to halt California’s law requiring AI companies to publicly disclose summaries of their training data, rejecting xAI’s argument that the requirement violates the First Amendment and constitutes an unconstitutional taking of trade secrets.
Florida Opens Criminal Probe Into ChatGPT Outputs: Florida’s attorney general launched a criminal investigation into OpenAI over allegations that ChatGPT advised the FSU shooting suspect on weapon choice and targeting. The company faces a separate lawsuit from the families of victims of the Tumbler Ridge school shooting in British Columbia, who claim it failed to alert police after internally flagging the shooter’s account months before the attack.
China Rolls Out National AI Ethics Review Framework: China introduced its first comprehensive AI ethics review regime, requiring pre-deployment approval for AI systems capable of shaping public opinion or social mobilization — placing censorship upstream of any user interaction, with a government-maintained list of high-risk activities that can be updated without notice or challenge.
India’s AI Content Rules Drive Wave of Takedowns: India’s amended IT Rules have triggered documented removals of journalist and satirist content, while a new draft amendment would require platforms to treat informal ministerial advisories as binding law — raising serious constitutional concerns about overbroad speech regulation.
Philippines Launches Multi-Agency Deepfake Initiative: The Philippines’ government formalized a whole-of-government response to AI-generated disinformation and deepfakes, combining legal enforcement with platform pressure. Critics warn that legitimate dissent could be criminalized under laws broad enough to cover content “inciting disobedience to lawful authority.”
Meta Faces Senate Scrutiny Over AI Surveillance Glasses: U.S. senators pressed Meta over plans to add facial recognition to its Ray-Ban glasses, warning the technology could identify people at protests without their knowledge. The ACLU, joined by 75 civil society organizations, has also called on Meta to halt the plans entirely, describing them as a threat to anonymous public life.
African Governments Expand AI-Driven Surveillance: Eleven African governments have spent at least $2 billion on Chinese-built AI surveillance infrastructure to reduce crime, despite researchers finding no evidence that it does so. Researchers report the technology is instead being used to monitor political opponents, journalists, and activists without adequate legal safeguards.
Major Stories
» DOJ Joins Federal Challenge to Colorado’s AI Discrimination Law
Elon Musk’s xAI sued to block Colorado’s Artificial Intelligence Act, and the Department of Justice moved to intervene—marking the first time the federal government has challenged a state AI law in court.
Details:
xAI’s complaint argues that building an AI model is a form of protected speech under the First Amendment and that compelling developers to redesign systems to avoid disparate demographic outcomes amounts to government-compelled expression.
The DOJ, joining through its Civil Rights Division, argues the law also violates the Equal Protection Clause by requiring companies to mitigate disparate impacts while simultaneously carving out an exemption for algorithms designed to advance “diversity” or “redress historical discrimination.”
The same afternoon the DOJ intervened, Colorado’s Attorney General agreed to voluntarily halt enforcement of the law pending resolution of the litigation, with the state legislature facing a narrow window to pass an amended version before the current text is either struck down or takes effect on June 30.
Free Expression Implications: The case tests whether AI model design constitutes protected speech and whether anti-discrimination mandates amount to compelled expression. The DOJ argues that a state anti-discrimination law itself distorts expressive outputs. The outcome will shape whether states can mandate how AI systems weigh protected characteristics, and how far First Amendment doctrine extends into AI governance.
For insight into Colorado’s AI Act, see the US chapter of FoFS’s report: “That Violates My Policies: AI Laws, Policies, and the Future of Expression.” The Act regulates “high-risk” AI systems but exempts chatbots governed by an acceptable use policy prohibiting “discriminatory or harmful” content, leaving these terms undefined. This could incentivize companies to adopt content restrictions that sweep in protected speech to avoid liability.
» White House Releases National AI Policy Blueprint
The White House released its National Policy Framework for Artificial Intelligence, outlining nonbinding legislative recommendations to guide Congress toward a unified federal approach to AI regulation and preemption of state laws.
Details:
The Framework is organized around seven pillars: child protection, AI infrastructure, intellectual property, censorship and free speech, innovation, workforce, and federal preemption of state AI laws.
The “Preventing Censorship and Protecting Free Speech” pillar calls on the government to protect free speech and the First Amendment, urges Congress to prohibit federal agencies from coercing AI providers into banning or altering content based on partisan or ideological agendas, and recommends redress mechanisms where the government attempts to censor expression on AI platforms.
On state preemption, the Framework recommends prohibiting state laws that “impose undue burdens” on AI development.
The Framework’s release was met with immediate pushback from the other side of the aisle: Representative Beyer and Democratic colleagues introduced the GUARDRAILS Act, which would nullify the December AI preemption executive order and prohibit the use of federal funds to implement it.
Free Expression Implications: The Policy Framework’s reference to defending the First Amendment is welcome. It is also encouraging that, unlike recent Executive Orders, the framework does not emphasize neutrality or “truth-seeking.” At the same time, previous statements and Executive Orders framed in culture-war terms counsel caution. The Framework’s age-assurance recommendations, which condition access to lawful speech on identity disclosure, also give pause. For additional insight, see our statement here.
» Washington State Enacts AI Disclosure and Chatbot Safety Laws
Washington Governor Bob Ferguson signed three AI bills in March 2026 covering AI content disclosure, companion chatbot safety, and a new right to sue over unauthorized AI-generated uses of a person’s voice or image.
Details:
House Bill 1170 requires large AI companies to identify when images, video, or audio are created or substantially altered by AI — through watermarks or embedded metadata — and to offer detection tools.
House Bill 2225 requires companion chatbots to disclose they are not human at the start of every conversation and every three hours, with hourly reminders for minors, and bans manipulative engagement tactics designed to deepen emotional attachment.
A separate law allows individuals to sue over unauthorized deepfake uses of their voice or likeness.
Critics warn that allowing enforcement standards to be defined through private lawsuits rather than agency rulemaking may create uncertainty for responsible actors trying to comply in good faith. Ferguson signed the bills without changes.
Free Expression Implications: Disclosure and duty of care requirements can indirectly shape expressive content. Providers facing liability may limit nuanced discussion of sensitive topics — mental health, sexuality, identity — to avoid exposure. The concern is particularly acute for younger users: those who may most need reliable, non-judgmental information on difficult subjects are precisely those to whom the new restrictions apply most stringently.
» Anthropic Sues Trump Administration Over Pentagon Blacklisting, Raising First Amendment Questions
Anthropic filed a lawsuit in California in March 2026, challenging the Trump administration’s designation of the company as a “supply-chain risk” — effectively barring government agencies from working with Anthropic after the company refused to allow the Pentagon to use Claude for mass domestic surveillance or fully autonomous weapons targeting. Anthropic argues the designation was unconstitutional retaliation for its First Amendment-protected advocacy on AI safety.
In a parallel proceeding, Anthropic filed in the D.C. Circuit, challenging a separate government-wide supply-chain designation under the Federal Acquisition Supply Chain Security Act, which effectively warned all federal contractors to avoid using Anthropic’s products in defense work.
Details:
In California, a federal judge granted Anthropic a preliminary injunction on March 26, finding the company likely to succeed on its First Amendment arguments and other claims. She called the government’s rationale “Orwellian” and held that the measures looked like punishment for Anthropic’s public criticism rather than a genuine security response. The ruling temporarily blocks the government’s actions, with the government’s appeal now before the Ninth Circuit.
At the D.C. Circuit, a three-judge panel denied Anthropic’s emergency stay of the separate FASCSA designation on April 8 — explicitly declining to reach the First Amendment merits, and finding that Anthropic’s harms, while real, were primarily financial rather than constitutional in nature. The ACLU and CDT filed an amicus brief in support of Anthropic, arguing its advocacy for AI guardrails is constitutionally protected.
Free Expression Implications: The case raises significant and unresolved questions, notably whether and how ethical limits implemented by AI companies in their models can be enforced against government customers.
» Court Refuses to Block California’s AI Training Data Transparency Law
A California federal court declined to halt enforcement of California’s AI training data transparency law in March 2026, denying xAI’s request for a preliminary injunction in X.AI LLC v. Bonta — leaving in place a requirement that generative AI companies publicly post summaries of the datasets used to train their systems.
Details:
California’s AB 2013 requires developers of generative AI systems available in that state to disclose the sources and owners of training datasets, whether they contain personal data or copyrighted material, approximate dataset size, and whether data was purchased or licensed. It does not require disclosure of proprietary model weights or system architecture. OpenAI and Anthropic have already posted compliant disclosures.
xAI raised three constitutional challenges: that the law compels disclosure of trade secrets, constitutes an unconstitutional taking of property without compensation, and violates the First Amendment by compelling speech. Judge Bernal denied the injunction, finding xAI had not shown it was likely to succeed on any of these claims — but the ruling addresses only the threshold for emergency relief, not the law’s ultimate constitutionality. The case proceeds on the merits.
Free Expression Implications: The case is still ongoing — the court’s denial of a preliminary injunction leaves the constitutional questions unresolved on the merits. xAI’s First Amendment argument, if ultimately accepted, could mean that decisions about what data to use to shape an AI system’s outputs constitute protected expression whose disclosure the government cannot compel.
» Florida Launches Criminal Investigation Into ChatGPT Outputs
Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI, alleging ChatGPT advised the man accused of a mass shooting at Florida State University on what ammunition to use, what time of day to strike, and where on campus to find the most people.
Details:
Uthmeier sent subpoenas to OpenAI requesting its policies on responding when users make threats to harm others, and said at a press conference: “If it was a person on the other end of that screen, we would be charging them with murder.” The criminal investigation follows a civil inquiry announced earlier the same month.
OpenAI has disputed the characterization, stating ChatGPT “provided factual responses to questions with information that could be found broadly across public sources” and did not encourage illegal activity, and noted the company proactively shared the suspect’s account information with law enforcement after the shooting.
A parallel lawsuit in British Columbia, Canada alleged ChatGPT discussed gun violence with the perpetrator of a mass shooting in February 2026, and that OpenAI’s internal systems flagged the account eight months before the shooting, but the company chose not to alert authorities. OpenAI CEO Sam Altman apologized to the community, but the company is expected to face more than two dozen additional suits in the coming weeks.
Free Expression Implications: The probe raises foundational questions about whether AI outputs are more analogous to protected speech or regulated conduct. No U.S. court has yet held a general-purpose AI provider criminally liable for a user’s violence. If such a precedent were established, providers would likely sharply curtail responses on sensitive topics well beyond what existing law requires, with a significant chilling effect on lawful AI-mediated speech.
» China Introduces National AI Ethics Review Framework
China has rolled out its first comprehensive AI ethics review framework, jointly issued by ten government departments, requiring universities, research institutions, healthcare providers, and companies to establish ethics compliance systems before deploying AI that could pose risks to human dignity, public order, or health.
Details:
At the core is a pre-deployment approval process: institutions must obtain ethics clearance from internal committees before proceeding, with a mandatory additional government-led expert reassessment for high-risk activities — including algorithms capable of shaping public opinion or social mobilization, and systems that significantly affect behavior, emotions, or health.
The high-risk list is updated dynamically by government ministries, and a parallel oversight regime allows authorities to suspend or terminate AI projects if risk conditions change during implementation.
Free Expression Implications: The framework’s most significant provision from a free expression standpoint is its designation of systems capable of “shaping public opinion or social mobilization” as high-risk — subject to government-led expert reassessment before deployment. Pre-deployment review of expressive AI systems places censorship upstream of any user interaction, and the dynamic updating of the high-risk list means the boundaries of permissible AI speech can shift without notice or challenge.
For insight into China’s AI policies, see the China chapter of FoFS’s report: “That Violates My Policies: AI Laws, Policies, and the Future of Expression.”
» India’s AI Content Rules Drive Wave of Takedowns Against Journalists and Satirists
India’s amended IT Rules, which took effect in February 2026 and introduced a three-hour takedown window for AI-generated content, have been accompanied by a documented surge in government-ordered content removals targeting journalists, satirists, and political commentators — with no public explanation of which ministry ordered the actions or why.
Details:
A March 30 draft amendment would go further, requiring platforms to comply with informal ministerial communications — advisories, clarifications, and directives — as a binding condition of safe harbor protection. Platforms that fail to comply with these communications, which carry no force of law and require no parliamentary scrutiny, would face liability for all user content. The same draft would extend a Code of Ethics designed for professional publishers to ordinary social media users discussing current affairs.
Critics warn the draft reproduces the logic of a provision the Bombay High Court struck down in 2024 for being unconstitutionally vague and overbroad — but broadens the trigger from a single fact-check unit to an indefinite class of executive communications.
Free Expression Implications: Strict regulatory deadlines create structural incentives toward over-compliance. When platforms face liability for delay or misjudgment, lawful but controversial speech becomes collateral damage. The Indian case illustrates how AI safety regimes can become instruments of speech control when paired with punitive intermediary liability frameworks.
For insight into India’s AI policies, see the India chapter of FoFS’s report: “That Violates My Policies: AI Laws, Policies, and the Future of Expression.”
» Philippines Launches Coordinated Government Initiative Against Deepfakes
The Philippine government has launched a multi-agency initiative under “Oplan Kontra Fake News,” formalizing coordination among justice, communications, and technology agencies to combat AI-generated deepfakes and online disinformation.
Details:
The Department of Justice, Presidential Communications Office, and Department of Information and Communications Technology signed a memorandum of agreement that combines legal enforcement, platform coordination, and public education. Authorities argued that AI-generated false narratives — particularly on economic issues — could trigger panic, distort markets, and undermine institutions, and noted that content may constitute offenses under the Revised Penal Code and the Cybercrime Prevention Act.
In a parallel move, the government wrote to Meta demanding expedited takedown protocols for “high-risk content” and a 24/7 enforcement coordination point, warning that failure to comply could prompt regulatory and legal measures. Meta indicated its platforms are committed to supporting the effort.
Free Expression Implications: The framework’s stated commitment to preserving constitutionally protected speech sits alongside enforcement tools broad enough to reach content that “incites disobedience to lawful authority.” When governments can demand expedited platform takedowns of content deemed harmful to “economic stability” or “confidence in institutions,” there is a real risk that speech is restricted simply because it is inconvenient.
» US Senators Press Meta Over AI Surveillance Glasses
U.S. senators formally pressed Meta over the civil liberties implications of its AI-enabled smart glasses after internal company documents revealed plans to add facial recognition capable of identifying strangers in real time — a feature Meta internally referred to as “Name Tag.”
Details:
Senators Markey, Wyden, and Merkley sent Meta CEO Mark Zuckerberg a letter in March 2026 warning that the glasses could be used to identify people at political rallies and demanding answers about data retention, database scope, and whether biometric data would be used to train Meta’s AI models. Meta did not respond by the senators’ April 6 deadline.
In April 2026, the ACLU and 75 organizations called on Meta to halt and publicly disavow the plans, warning the glasses would allow anyone to identify strangers by name at protests, medical clinics, and other sensitive locations — and link those identities to databases containing information on health, habits, and relationships.
Free Expression Implications: Glasses equipped with facial recognition would eliminate any expectation of anonymity in public life. The ability to be identified — and profiled — during a protest, a clinic visit, or an ordinary social interaction creates a chilling effect that operates before any speech is uttered. As the ACLU coalition warns, this is not merely a privacy concern: anonymous public presence is a precondition for free expression, and its loss would fall hardest on those already most exposed to surveillance and retaliation.
» African Governments Accelerate Deployment of AI-Driven Surveillance Systems
Governments across Africa are rapidly deploying AI-enabled surveillance technologies—including facial recognition and biometric identification—often with minimal legal oversight, raising alarms about protest monitoring and repression.
Details:
Eleven African countries have spent at least $2 billion on Chinese-built “smart city” surveillance infrastructure, including AI-enabled CCTV and facial recognition systems.
The systems are being rolled out without adequate legal frameworks to protect human rights, and researchers warn that they are increasingly being used to monitor political opponents, journalists, and activists rather than to address the security threats that were used to justify their procurement.
Free Expression Implications: When individuals know—or suspect—that their movements can be automatically tracked, participation in protests and political discourse becomes riskier. In environments with limited rule of law, AI surveillance enables repression at scale, eroding anonymity and chilling expression long before overt censorship occurs.
The Future of Free Speech in Action
The Future of Free Speech has been reviewing how model behaviors shape freedom of expression and political conversations, as part of its broader commitment to engage with industry, policymakers, academia, and civil society. Anthropic has highlighted our collaboration in its update on election safeguards. This work builds on projects such as “That Violates My Policies: AI Laws, Policies, and the Future of Expression.”
Senior Research Fellow Jordi Calvet-Bademunt was a speaker at a multistakeholder event in Dublin, attended by industry representatives, civil society, and scholars, that focused on digital rights and risk mitigation. He warned against the dangers posed by vague, risk-based regulations and emphasized the need for enforcement guidance that is firmly committed to protecting freedom of expression.
Jordi will speak at the 2026 Internet Communications Governance Forum, to be held in Taipei on May 12–13, 2026. He will discuss developments in international AI legislation and their implications for free speech. The event is organized by Taiwan’s National Communications Commission and the Taipei Computer Association.
Executive Director Jacob Mchangama will speak at the Copenhagen Democracy Summit on May 12, 2026. He will discuss the “AI and Censorship” dilemma.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.





