Even Revolutionary AI Is Protected by the First Amendment
The First Amendment has weathered anarchists, communists, and panic-driven precedent; it can weather the chatbots, too.
In a normal conversation about whether AI outputs are protected by the First Amendment, there wouldn’t be much to discuss. We’re dealing, after all, with LLMs—large language models. Words. Ideas. The Supreme Court has affirmed your right to receive information, from virtually any source, many times over.
It has also repeatedly held that editors — compilers of information — have the right to control their expressive compositions. The First Amendment covers chatbots coming and going. It says the government may neither dictate how AI firms train and design LLMs nor alter the outputs you seek from them. Case closed.
These, however, are not normal times. Can you feel the AGI? The stochastic-parrot crowd was comically wrong. The chatbots have become really good, and they’re only getting better. Frontier labs are updating their models at a startling clip. They now talk openly about recursive self-improvement—the models designing their own successors. Calls are growing for AI to be paused, outlawed, nationalized, maybe blessed by a priest. The future has arrived, and it is weird.
Fortunately, America was built for this. We are weird. “In the United States,” James Madison wrote, “the People, not the Government, possess the absolute sovereignty.” Thus it is, he concluded, that “the censorial power is in the people over the government, and not in the government over the people.”
In this country, we don’t cower before an aristocracy. We don’t obey an established church. We rule ourselves — which means, crucially, that we think for ourselves. No overseer tells us what we may read or whom we may trust. No one gets to decide which ideas are too dangerous to entertain. You may recall that we fought a revolution over this. We are unruly, even riotous. We are anti-tutelary. We are not housetrained. Don’t tell us what to think.
In a country like this, the arrival of AI is not a crisis; it’s a Tuesday. A machine that will speak in almost any voice, discuss almost any topic, and offer almost any opinion is the most goddamned American thing there could be. LLMs should be allowed to flourish, free of government censorship, lest we become traitors to our history and our culture. Don’t you dare claim that the Founders would disagree. Those fractious men loved them some political dissent. Who, they’d have exclaimed, is to tell them they can’t use an LLM to draft rebellious tracts or explore forbidden thoughts?
Fine, the fears are not baseless. In the words of journalist Stephen Witt, “a mechanical brain with one hundred trillion synapses firing at five billion cycles a second ha[s] no precedent in history, religion, or philosophy.” Hard to argue with that. Nor are the leaders in the field helping matters with their public musings about how AI will destroy entry-level jobs and, oh yeah, just maybe kill everyone. Maybe this warrants new taxes, stronger antitrust laws, or fresh intrusions into the labor market. But when it comes to AI as a medium of expression, we should stick to our principles.
There are three possibilities. The first is that the doomsayers are right: AI becomes superintelligent, then turns us all into paperclips. But the notion that the “god in a box” will abruptly turn on us piles speculation on speculation. Building First Amendment doctrine around the disaster hypothesis would be like stripping every major newspaper of constitutional protection on the theory that a Bond villain might take over all of them in a bid for world domination. It could happen!
Some people respond to every conceivable threat by clamoring for a more powerful government. You start to think the dream of an all-powerful government is what’s truly driving the bus. This is no way to do constitutional law. In any event, if very evil AI really comes swerving around the corner, we’ll have bigger problems than quarrels over the First Amendment — and we’d only compound those problems by making the government a bigger leverage point for HAL 9000.
The second possibility is that AI turns out to be ordinary, albeit very capable, technology. It’s smart, and it’s helpful, but it’s not destined to become superintelligent any time soon. It makes people better informed and more articulate, but it is not itself so overbearingly persuasive that it can brainwash people into believing whatever it tells them. (Alternatively, it is dazzlingly good at argument and rhetoric, but you believe that people’s opinions run deeper than that.)
In this situation, AI is a cool new medium, with the mix of benefits and drawbacks that that entails, and nothing more. It should be treated as constitutionally protected expression, just like all the other cool new media that have come before. Carry on. As you were.
Under the third — and in light of recent events, most likely — scenario, AI, though not Skynet, is indeed revolutionary. It reshapes what people think about themselves and the world. It’s a big deal. But stale narratives and conventional wisdom don’t need constitutional backing. A “marketplace of ideas” sounds antiseptic, but it is in fact a radical commitment. America is always open to fresh ways of seeing things. That is why the First Amendment exists.
We’ve grappled with powerful new ideas before. There was a time when Marxism seemed poised to sweep all before it. One evening in June 1919, an anarchist named Carlo Valdinoci approached the Washington home of the attorney general. He was carrying a large bomb, but it went off early and killed him. Franklin and Eleanor Roosevelt, who lived across the street, were shaken by the blast. Valdinoci and many others wanted to hasten the collapse of the capitalist order. Faced with this threat—one that serious people regarded as existential — the Supreme Court for quite some time did almost everything wrong. It upheld convictions for circulating pacifist, anarchist, and communist literature. It let the Socialist Party’s candidate for president go to prison for making a campaign speech. It blessed prosecutions that all but outlawed active membership in the Communist Party.
When Whittaker Chambers renounced communism in 1938, he believed he was leaving “the winning world for the losing world.” His mind had not changed when he testified before Congress a decade later, nor when he died in 1961. The threat persisted. Yet the Supreme Court eventually found its nerve.
In the 1950s and ’60s, a run of decisions chipped away at the Red Scare precedents. Then came Brandenburg v. Ohio (1969), which held that advocacy can be punished only when it is meant to incite imminent lawless action and is likely to do so. It took half a century, but the First Amendment emerged from the Red Scare stronger than it went in.
The parallel between Valdinoci and Daniel Moreno-Gama — the young man who allegedly hurled a Molotov cocktail at Sam Altman’s home this month while carrying a doomer manifesto — is, to put it mildly, available. Panic makes bad law.
As before, so again: we should expect setbacks in court. Some judges will lean into their anxieties and withhold First Amendment protection from AI outputs. But “in calmer times,” Justice Hugo Black wrote, dissenting in one of the now-repudiated Red Scare decisions, “when present pressures, passions and fears subside,” the Supreme Court “will restore the First Amendment liberties to the high preferred place where they belong in a free society.”
AI may yet reveal stupendous ideas and deeper truths. As it does, we should stand by the First Amendment. We should not be afraid of new knowledge.
Corbin K. Barthold is Internet Policy Counsel at TechFreedom. This article is adapted from his new paper, AI + 1A: Why the First Amendment Protects Artificial Intelligence.