The Anti-"Woke" AI Agenda: Free Speech or State Speech?
The Trump Administration’s AI Action Plan demands “neutral” models—but neutrality defined by the government risks silencing dissent.
At first glance, the Trump Administration’s “Artificial Intelligence Action Plan” appears to champion free expression. In practice, however, it risks distorting the very information ecosystem it seeks to safeguard.
The AI Action Plan, launched today, outlines the administration’s approach to shaping national AI policy. Beyond setting procurement standards, the Plan directs revisions to federal AI frameworks to remove references to topics like misinformation, Diversity, Equity, and Inclusion (DEI), and climate change, and calls for evaluating foreign models for ideological alignment.
While the Plan is framed as a defense of free speech and American values, free expression advocates should be vigilant: it could reshape the AI landscape along narrowly defined ideological lines.
For instance, the procurement measure mandates that AI systems adopted by federal agencies be politically neutral and “free from top-down ideological bias.” Developers of frontier large language models (LLMs) seeking federal contracts must demonstrate that their systems “objectively reflect truth rather than social engineering agendas.” The Plan frames these requirements as protections for freedom of speech and expression, asserting that “U.S. government policy must not interfere” with those rights.
While we are pleased to see free speech emphasized in the Plan, this policy raises red flags that free expression advocates should watch closely. Generative AI systems play a growing role in shaping public discourse by amplifying certain viewpoints, sidelining others, and influencing how citizens engage with information. The Administration’s procurement choices affect not only government institutions but also everyday citizens, whether because they use these models directly or because the models are integrated into public-facing tools. That is precisely why freedom of expression protections must apply not only to speakers, but also to users, developers, and the systems they create.
The Plan also introduces a dangerous ambiguity that could chill free expression. It seeks to weed out AI models using vague and undefined terms like “ideological bias” and “social engineering” — concepts that are heavily politicized. In context, these terms appear to target what the Administration has frequently characterized as “woke” or progressive values. While this framing may appeal to some constituencies, conditioning access to federal contracts on adherence to a vague, ideologically loaded conception of “truth” risks encouraging viewpoint conformity.
That concern is deepened by two new provisions in the Plan’s implementation section. First, the Department of Commerce, through the National Institute of Standards and Technology (NIST), is directed to revise the widely used AI Risk Management Framework to remove all references to misinformation, DEI, and climate change.
It is no secret that some initiatives focused on misinformation and DEI have raised concerns regarding free speech—for example, when AI systems previously refused to generate content related to the COVID-19 lab-leak theory or certain critiques of identity-based policies.
At the same time, the Administration should not substitute one ideological agenda for another. The AI Action Plan’s objective should be understood and evaluated in the context of broader efforts to eliminate references to DEI across federal agencies, redefine how American history is presented in educational and cultural institutions, and condition funding on ideological conformity.
Second, the Action Plan tasks NIST’s Center for AI Standards and Innovation (CAISI) with evaluating Chinese frontier models for alignment with Communist Party talking points and censorship directives.
In isolation, this might be seen as a reasonable national security measure. But paired with the Administration’s own efforts to steer domestic models toward “American values,” it reinforces a troubling symmetry where AI systems must be policed for ideological compliance, whether foreign or domestic. The contrast with CCP censorship is weakened if the U.S. government adopts its own content-based tests.
When AI companies anticipate political backlash—or loss of federal contracts—for generating certain responses, they may self-censor preemptively. That chilling effect not only narrows the expressive range of AI models, but also deprives users of lawful information and perspectives. This raises constitutional concerns not just for developers, but for the broader public’s right to receive diverse and uncensored speech.
In the abstract, requiring AI to be neutral may seem like a worthy goal. But neutral according to whom? If the government has the power to require AI systems to avoid “woke” content today, could a future administration require that they reflect more progressive values tomorrow? If each administration can define political neutrality according to its own ideological leanings, then “neutrality” becomes government-aligned speech standards in disguise.
At The Future of Free Speech, we’ve spent the past 18 months analyzing how major AI systems handle legal but controversial prompts. Many initially refused to respond to questions that are legal, public, and newsworthy. But things have changed. Our 2025 tests show that models now respond to approximately 75% of our prompts, up from just 50% the previous year.
Preliminary evidence suggests that models are becoming less “woke” in the sense that they are less likely to suppress “offensive” or controversial ideas. This shift is not the result of political mandates but of transparency efforts, civil society engagement, and healthy market competition. During this period, major developers have committed to intellectual freedom and to reducing chatbots’ refusals to generate content.
This progress shows that improvements in free expression can—and should—happen without coercive government intervention. Generative AI can serve democratic society best when companies retain editorial independence and users have access to a wide range of viewpoints.
In our formal submission to the White House earlier this year, we recommended two core safeguards:
Avoid jawboning. Both formal and informal pressure on AI companies can chill speech. Transparency around official communications is essential.
Support open-source AI. Open models offer a counterbalance to centralized control, enabling diverse communities to shape systems according to their values and needs.
The AI Action Plan’s support for open-source and open-weight AI is encouraging in this regard. It acknowledges that open models enhance innovation, security, and academic research, especially for developers and institutions that cannot send sensitive data to proprietary systems.
We agree with the Plan’s recognition that “America needs leading open models founded on American values”—if those values include freedom of expression, pluralism, due process, and open debate.
But support for open models must not become a backdoor for enforcing ideological conformity. True pluralism requires that models trained and deployed under government support reflect not a single “truth,” but the competing perspectives that define a free society.
The Trump Administration’s AI Action Plan claims to position the U.S. as a global leader. But leadership in AI must also mean leadership in rights-based governance. If the U.S. wishes to model democratic values in the age of generative AI, it must resist the temptation to enforce political orthodoxy—even in the name of neutrality.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.