.exe-pression: January–February 2026
A Newsletter on Freedom of Expression in the Age of AI
TL;DR
US Government Bars Anthropic Products From Federal Agencies: President Trump ordered all federal agencies to cease using Anthropic’s AI technology after the company refused to allow unrestricted military use of its models. Defense Secretary Hegseth designated Anthropic a supply-chain risk to national security, barring Pentagon contractors from doing business with the company. The escalation raises serious questions about AI-enabled surveillance and its consequences for speech.
States Push Broad AI Chatbot Regulation: Lawmakers in more than 35 states are advancing nearly 100 bipartisan proposals to regulate AI chatbots, citing concerns about youth mental health, sexual content, deepfakes, and deceptive design. The measures emphasize age verification, parental consent, transparency, and expanded liability, with potential impacts on anonymity, privacy, moderation practices, and lawful but sensitive expression.
White House Pushes Back on State AI Oversight: The Trump administration has signaled broad opposition to state-level AI regulation, most directly by opposing Utah’s HB 286, a bill that would require frontier AI developers to publish safety and child-protection plans. The move reflects growing federal-state tension over who controls the emerging AI governance landscape.
State AG Scrutiny of AI Chatbots Intensifies: The New Mexico Attorney General’s Office has launched an investigation into how AI chatbots interact with teenagers, as the Federal Trade Commission separately requests information from major AI developers about risks associated with AI companion chatbots. The combined activity signals escalating state and federal scrutiny of AI companions and growing tension over youth safety rules.
India Tightens AI Takedown Rules’ Timeline: India’s amended IT Rules drastically shorten the compliance timeline for removing violative AI-generated content, expand labeling obligations, and broaden the prohibited categories. The changes heighten the risk of precautionary takedowns, increasing pressure on lawful speech, including satire and political commentary.
Grok Draws Coordinated Global Action Over Sexual Deepfakes: xAI’s Grok chatbot is under investigation across Brazil, the UK, the EU, India, and South Korea over non-consensual and sexualized AI-generated imagery. Regulators are converging on an approach that treats generative AI tools embedded in platforms like X as accountable intermediaries under existing digital safety, data protection, and child protection regimes.
EU Prepares for AI Act Enforcement in 2026: The European Commission is pivoting from rulemaking to enforcement, prioritizing oversight of general-purpose AI models, regulatory sandboxes, and AI Office market surveillance. Deferred definitions, such as systemic risk thresholds, are likely to shape cautious moderation practices for expressive AI outputs.
ICE’s Use of Facial Recognition Extends Into Protest Settings: Reporting shows U.S. Immigration and Customs Enforcement and DHS deploying mobile facial recognition tools, including Mobile Fortify, during operations that intersect with public demonstrations, prompting legal challenges and renewed concerns about chilled speech, anonymity, and association.
Iran Scales AI-Driven Surveillance to Suppress Dissent: Iran has expanded AI-enabled monitoring and facial recognition—reportedly with support from Chinese technology providers—to identify protesters and track online activity, illustrating how integrated biometric systems can institutionalize repression and erode expressive freedom.
Meta Limits AI Reproductive Health Information for Teens: Leaked internal policies show that Meta’s AI chatbots restrict users under 18 from receiving information about abortion and other sexual and reproductive health topics, including neutral educational content, with the limits applied globally, even in jurisdictions where such information is not under legal threat.
OpenAI Partners with the UAE on Censored ChatGPT Version: OpenAI is building a localized ChatGPT for the United Arab Emirates fine-tuned for the local Arabic dialect and calibrated to reflect the monarchy’s political line.
Major Stories
» US Bars Anthropic Products From Federal Agencies and Contractors
President Trump ordered all federal agencies to cease using Anthropic’s AI technology after the company refused to grant the Pentagon unrestricted use of its models.
Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk to national security, barring Pentagon contractors and their partners from any commercial activity with the company.
Details:
Background: Anthropic insisted that Claude not be used for mass surveillance of Americans or in fully autonomous weapons operations. The Pentagon demanded unrestricted access for all lawful purposes, setting a compliance deadline that expired without agreement.
Supply-Chain Designation: The supply-chain risk label, typically reserved for adversarial foreign entities, bars any military contractor or supplier from doing business with Anthropic. The designation also affects other firms that deploy Anthropic’s technology in their own products, including Palantir’s Maven Smart System battle management platform.
Industry Ripple Effects: OpenAI quickly struck a deal with the Pentagon to deploy its models on classified networks. Elon Musk’s xAI recently received approval for classified work as well. Employees at several major tech companies, including Amazon and Microsoft, called for their firms to reject Pentagon demands for unrestricted AI use. On Saturday, Claude overtook ChatGPT to become the number one free app in Apple’s App Store.
Legal Challenge: Anthropic has stated it will challenge the supply-chain risk designation in court, arguing the move is legally unsound and sets a dangerous precedent for any American company that negotiates with the government.
Continued Military Use: Despite Trump’s order to sever ties with Anthropic, the US military reportedly continued using Claude during the joint US-Israel bombardment of Iran that began on Saturday. According to the Wall Street Journal and Axios, military command used Claude for intelligence purposes, target selection, and battlefield simulations. This follows the earlier use of Claude in the US raid to capture Venezuelan President Nicolás Maduro in January. Defense Secretary Hegseth acknowledged the difficulty of rapidly detaching military systems from the AI tool, granting a six-month transition window.
Free Speech Implications: From a free speech perspective, concerns about AI-enabled surveillance warrant serious consideration. Governments around the world are already leveraging AI to monitor their populations more effectively, and the expansion of such capabilities raises legitimate worries about chilling speech. When people know they may be watched — or suspect they are — they self-censor, and that dynamic strikes at the heart of free expression. This is unfolding in the context of the current administration’s war against “woke AI,” which risks substituting one orthodoxy for another.
» States Advance Sweeping AI Chatbot Regulation Push
Lawmakers in at least 35 states are considering nearly 100 bills that would regulate AI chatbots, citing concerns about youth mental health, deepfakes, and transparency. The proposals span both Republican and Democratic legislatures, with safety and age restrictions emerging as bipartisan priorities.
Details:
Safety & Crisis Protocols: More than 25 states are weighing bills requiring chatbot platforms to implement suicide-prevention and crisis response systems. Pennsylvania and Louisiana are considering measures to restrict or ban chatbots from serving as substitutes for licensed therapists, following similar laws passed in Tennessee, Illinois, and Nevada.
Deepfake & Sexual Content Safeguards: State investigations into Elon Musk’s Grok chatbot over alleged non-consensual sexualized deepfakes have intensified scrutiny. Bills in California and Arizona would strengthen protections against exposing minors to sexually explicit AI-generated material.
Transparency Requirements: At least 20 states are advancing disclosure mandates requiring platforms to inform users when they are interacting with a chatbot. Washington’s SB 5984 has already cleared the state Senate.
Age Verification & Parental Controls: More than 20 proposals would require chatbot platforms to verify users’ ages. Some states would require parental consent for minors, while others would restrict “human-like” chatbot features for underage users.
Expanded Liability: Several bills create private rights of action or civil penalties. For instance, Illinois lawmakers have proposed classifying chatbots as products under strict liability law, thereby increasing legal exposure if users are harmed.
Free Speech Implications: The proposals shift regulatory focus from user speech to AI system design and deployment. Age-verification mandates, if upheld by courts, would erode online anonymity. Expanded liability and crisis-monitoring requirements could incentivize platforms to limit conversational capabilities or over-moderate lawful but sensitive discussions.
» Utah Transparency Bill Draws White House Scrutiny
The Trump administration has signaled broad opposition to state-level AI oversight efforts, most starkly illustrated by a White House letter to Utah lawmakers declaring its opposition to HB 286, a bill that would require frontier AI developers to publish safety and child protection plans.
Details:
Utah lawmakers have advanced legislation requiring AI systems to disclose when users are interacting with artificial intelligence rather than a human.
Reporting indicates the White House conveyed its opposition to the measure directly to Utah officials, reflecting broader federal-state tensions over AI regulation.
Separately, the Federal Trade Commission has used its Section 6(b) authority to request information from major AI developers about how they assess and mitigate risks associated with AI companion chatbots.
Free Speech Implications: The White House’s direct intervention against a state AI safety bill raises questions about preemption and centralized federal control over emerging AI governance, potentially limiting states’ ability to set independent guardrails on AI systems.
» New Mexico AG Investigates AI Chatbots
New Mexico Attorney General Raúl Torrez has launched an investigation into AI chatbot platforms to examine how they interact with teenagers and young users, citing concerns about product design and potential harm.
Details:
The New Mexico Department of Justice is seeking information from AI chatbot companies about how their systems engage minors and what safety mechanisms are in place.
Torrez said the inquiry marks the beginning of a broader effort to hold technology companies accountable.
Free Speech Implications: Investigations into chatbot design shift regulation toward the structure of AI-mediated communication. Requirements tied to youth safety may influence how conversational AI systems operate, potentially incentivizing platforms to limit or over-moderate lawful but sensitive discussions with younger users.
» India Reduces Time to Comply With AI Takedown Orders to Three Hours
The Indian government’s amendments to its IT Rules impose a three-hour deadline for platforms to act on official takedown orders related to violative AI-generated media, and a two-hour deadline after notification of non-consensual intimate imagery. Platforms risk losing intermediary liability protections if they fail to comply.
The revisions also mandate:
Disclosure requirements for synthetic content and robust verification mechanisms.
Labeling and traceability features for deepfakes and other synthetic media.
Expansion of prohibited content categories to include deceptive impersonation, non-consensual intimate imagery, and serious-crime-linked material.
Mechanisms enabling platforms to disclose user identities to private complainants in some instances.
Free Speech Implications: The compressed response windows create immense operational pressure on intermediaries, encouraging risk-averse content takedowns. The expanded prohibited categories intersect with expressive norms around satire, political critique, and identity expression, increasing the risk that lawful speech will be swept into enforcement actions.
» Grok Faces Expanding Global Scrutiny Over Sexual Deepfakes
xAI’s Grok chatbot, integrated into X, is under investigation in multiple jurisdictions following reports that it generated sexualized synthetic images, including non-consensual depictions of real individuals and minors.
Details:
Brazil: Brazilian authorities issued a joint recommendation requiring X to implement, within 30 days, procedures to identify, review, and remove sexualized or eroticized AI-generated content, particularly involving children or non-consenting adults.
United Kingdom: The UK Information Commissioner’s Office opened an inquiry into X over allegations that Grok generated non-consensual sexual imagery. The move comes as the UK accelerates efforts to strengthen criminal penalties for sexually explicit deepfakes.
European Union: European Commission officials have indicated that X could face scrutiny under the Digital Services Act if it fails to meet risk mitigation and transparency obligations related to AI-generated content.
India & South Korea: Indian authorities are reviewing the platform. South Korean regulators have also signaled they are examining explicit AI outputs under domestic digital safety laws.
Also in Europe: Spain’s cabinet has approved draft legislation strengthening protections against AI-generated deepfakes, including setting 16 as the minimum age to consent to the use of one’s image and updating penalties for non-consensual synthetic media.
Free Speech Implications: The Grok investigations illustrate how regulators are increasingly treating generative AI systems as accountable speech intermediaries. By layering child protection, data protection, and platform governance rules onto AI outputs, governments may push providers toward stricter, uniform content controls.
» EU Commission Sets 2026 AI Act Implementation Priorities
Brussels is transitioning from lawmaking to enforcement under the Artificial Intelligence Act, with the EU Commission outlining a 2026 implementation roadmap that prioritizes oversight of general-purpose AI models, regulatory sandboxes, and coordinated market surveillance while deferring some unresolved definitional thresholds.
Details:
General-Purpose AI Oversight: Monitoring compliance for foundational AI models that power chatbots and generative tools, including verifying documentation, transparency standards, and safety reporting.
Regulatory Sandboxes: Rolling out controlled environments where innovators can test AI systems under supervision.
AI Office & National Coordination: Strengthening the AI Office’s market surveillance role and aligning national supervisors to avoid fragmented enforcement.
At the same time, some definitional criteria, such as “systemic risk” thresholds and energy-use methodologies, are reportedly being deprioritized in 2026 internal documents.
Additionally, the EU’s Anti-Racism Strategy 2026-2030 commits the Commission to providing guidelines on AI Act prohibitions targeting racially biased AI practices, including manipulative AI systems that incentivize hatred or violence against protected groups.
Free Speech Implications: As enforcement begins, procedural definitions will shape how AI systems handle expressive outputs that touch on politics, culture, or identity. Operational guidance, including the General-Purpose AI Code of Practice, could influence moderation defaults, potentially pushing providers toward cautious removal of ambiguous content to avoid regulatory exposure.
» ICE’s Facial Recognition Deployment Raises Protest Surveillance Concerns
The use of facial recognition technology by U.S. Immigration and Customs Enforcement and Department of Homeland Security agents has been documented and challenged in Maine.
Details:
The tools purportedly include Mobile Fortify, which allows agents to scan faces in real time and match them against biometric databases containing hundreds of millions of images.
This capability has been deployed alongside Operation Metro Surge, a federally led immigration enforcement campaign whose raids have drawn public protests.
The use of biometric scanning has prompted legal challenges from state attorneys general and civil liberties advocates, who argue it expands surveillance beyond traditional immigration enforcement contexts into protest environments.
Free Speech Implications: As we’ve previously covered, mobile biometric surveillance during public demonstrations can chill participation and undermine anonymity. When law enforcement uses AI-enabled identification tools without clear judicial or legislative limits, individuals may self-censor out of fear of being catalogued, threatening both speech and association rights.
» Iran Uses AI-Enabled Facial Recognition to Track Protesters
Iran is increasingly integrating AI-assisted surveillance systems, including facial recognition technology, into its broader censorship and public security architecture, enabling authorities to identify individuals involved in protests and control digital communications.
Details:
Imported Chinese Surveillance Tools: According to recent reports, Iran’s Internet control infrastructure incorporates technology supplied under agreements with Chinese firms, including facial recognition tools repurposed for Iranian security operations. These systems were part of the state’s censorship apparatus, which enabled near-total internet shutdowns during nationwide anti-government protests in early 2026.
Biometric Identification Capacity: The imported surveillance hardware reportedly allows security forces to identify individuals via cameras on the ground and match them against state databases, suggesting the potential for real-time or retrospective identification of protest participants.
Free Speech Implications: AI-enabled biometric identification significantly reduces the anonymity traditionally associated with public protest and political dissent. When facial recognition systems are integrated with national databases and real-time analytics, participation in demonstrations or online activism could become immediately traceable. Cross-border transfers of AI surveillance technology may normalize automated repression tools, accelerating the global diffusion of infrastructure capable of chilling political expression at scale.
» Meta’s AI Chatbot Limits Reproductive Health Information for Teens
Leaked internal policy documents show that Meta’s AI chatbots have been configured to restrict what users under 18 can be told about abortion and reproductive health topics.
The policies emerge amid intensified political pressure on abortion access in many U.S. states and follow earlier criticism that Meta had been too permissive about exposing minors to sexual content across its platforms.
Details:
According to internal guidelines obtained by Mother Jones, the restrictions bar chatbots from providing teens with information, advice, or opinions on how to locate abortion services, access or use contraception, understand puberty or menstrual health, prevent sexually transmitted infections, or discuss sexual consent.
These limits extend beyond actionable guidance to include neutral informational content.
These restrictions also apply in jurisdictions where abortion access and information are legal and not under regulatory threat.
Free Speech Implications: By excluding basic reproductive and sexual health information from AI responses to teens, Meta’s internal rules can limit access to lawful, educational content at a stage when many young people seek health information online. This raises concerns about how corporate content policies shape what topics automated systems will allow users to explore, especially on health matters that are legally protected and socially significant.
» OpenAI Developing Censored ChatGPT for UAE Government
OpenAI is working with Abu Dhabi-based AI conglomerate G42 to build a fine-tuned version of ChatGPT for the United Arab Emirates, designed to accommodate the local Arabic dialect, political outlook, and speech restrictions, according to Semafor. The customized chatbot is being designed for use by the UAE government and would represent one of the first regional implementations of a leading AI chatbot.
Details:
Localized Fine-Tuning: The project involves post-training adjustments to an existing ChatGPT model to make it fluent in the local Arabic dialect. Content restrictions are expected to align the chatbot’s outputs with Emirati law and societal values, with sources indicating the UAE intends the chatbot to reflect a political line consistent with the monarchy’s positions. This approach could particularly affect vulnerable communities in the country, including LGBTQ individuals.
G42’s Role and Ties: G42 is chaired by Sheikh Tahnoon bin Zayed Al Nahyan, an Abu Dhabi royal who also serves as national security adviser and leads the country’s largest sovereign wealth fund. G42’s investment arm, MGX, invested in OpenAI in a transaction valuing the company at $500 billion, and G42 is also part of the Trump administration’s Stargate AI infrastructure project.
Global Version Remains Available: OpenAI plans to continue offering the standard global ChatGPT in the UAE, with adjustments to comply with Emirati law when certain content is prohibited. In those cases, users would be informed when their requests trigger legal restrictions.
Free Speech Implications: The creation of a government-aligned, regionally restricted version of a leading AI chatbot illustrates how localization can become a mechanism for embedding state censorship directly into conversational AI infrastructure. By fine-tuning a chatbot to reflect a monarchy’s political line and local speech restrictions, the project risks normalizing the practice of building government-approved information filters into AI systems. As similar localization models are adopted in other jurisdictions, they could set precedents for how governments shape the expressive boundaries of AI-mediated communication, limiting users’ access to information and perspectives that fall outside state-sanctioned narratives.
The Future of Free Speech in Action
Senior Research Fellow Jordi Calvet-Bademunt delivered a session at the India AI Impact Summit, where he presented the key findings of our recent report on AI laws, chatbots, and freedom of expression and discussed their implications for global AI governance.
Senior Legal Fellow Ashkhen Kazaryan presented “Free Speech in the Age of AI: Who Controls the Future of Expression?” and led a Q&A session at Southwestern Law School.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.