Will AI Supercharge Free Expression or Suppression?
An adapted version of my remarks from the Copenhagen Democracy Summit.

Earlier this week, I spoke at the Copenhagen Democracy Summit about how generative AI is reshaping the future of free expression. What follows is an adapted version of my remarks, featuring some of the most promising examples of “Creative AI”, the rising dangers of “Intrusive AI”, and the global tug-of-war between openness and control. You can also watch the full video of my talk at 43:28 here. I explore this topic in greater depth in my forthcoming book, The Future of Free Speech, co-authored with Jeff Kosseff and due out from Johns Hopkins University Press in 2026.
When it comes to AI and free speech, I find it helpful to distinguish between two broad categories that we might call Creative AI and Intrusive AI.
Creative AI can be incredibly empowering. It helps us think, research, and create more effectively. It’s like an exoskeleton for the mind. And it’s no longer just about chatbots. Generative AI is quickly becoming the backbone of how we search, write, email—how we access and process information. It’s turning into the default interface for knowledge and expression. So how it’s trained, filtered, and governed won’t just shape what we see and say—it may shape how we think.
From a free speech perspective, one of the most promising features of Creative AI is its ability to empower dissidents in authoritarian states. In Belarus, the banned opposition launched an AI-generated candidate to share uncensored ideas. In Venezuela, independent journalists are using AI-generated avatars to report the news and avoid arrest. And in Russia, the exiled outlet IStories developed an AI tool called Charon, which identified over 100,000 Russian soldiers killed or missing in Ukraine—data-driven journalism that would be impossible from inside the country.
But there’s a darker side: Intrusive AI, which is used to monitor, predict, and manipulate behavior, often without consent. In the U.S., the State Department’s “Catch and Revoke” program tracks visa holders’ social media for signs of dissent. And in Russia and China, we’re seeing the authoritarian endgame: real-time surveillance, facial recognition, and AI-powered censorship at scale.
The stakes are high. AI could supercharge free expression—or supercharge its suppression. Which path we take depends on the choices we make now.
Last year, The Future of Free Speech tested 268 prompts on six major chatbots—including ChatGPT, Gemini, and Claude—to see how they handled controversial but legal topics. The results were troubling: about half the time, these AI systems refused to respond. Topics like transgender athletes in sports or the COVID-19 lab leak were simply off-limits—not because the content was illegal, but because the models were over-correcting in the name of safety.
But here’s the good news: things are changing. When we repeated our tests this year, refusal rates dropped significantly—from around 50% to 25%. OpenAI introduced an “intellectual freedom” policy. Anthropic cut unnecessary refusals nearly in half. Grok had a perfect record with zero refusals.
We also tested how chatbots handled sensitive topics about China. Most U.S. models passed. But DeepSeek—a powerful Chinese open-source model that performed well otherwise—completely shut down on issues like Taiwan, Tiananmen, and Xinjiang. It’s trained to reflect “core socialist values,” which is code for whatever the Communist Party wants you to believe.
So yes, we’re seeing real progress toward more open, viewpoint-diverse AI. But that progress is fragile. New political winds, profit motives, or social pressures can easily change the calculus.
This is especially true when so much power is concentrated in the hands of so few. We’ve seen this play out with Elon Musk and his technology ventures. On X, his social media platform, he’s imposed ideological moderation rules, punished progressive voices, sued critics, and complied with takedown demands from governments like India and Turkey. That’s a far cry from his claim to be a free speech absolutist.
But interestingly, Musk’s chatbot Grok tells a different story. When I asked Grok about Musk’s role in DOGE, it gave a long, detailed answer and concluded that his involvement raises serious ethical and legal concerns due to his dual role as a government advisor with oversight over areas affecting his own businesses.
Still, the broader concern remains: with so much power in so few hands, AI companies may suppress content. That’s why open-source models are so important. They allow anyone to access and modify the technology, helping preserve freedom of thought and access to information.
Of course, openness comes with risks—disinformation, hate speech, and abuse. But that tradeoff is inherent to any communication technology that democratizes access to information. The printing press and the Internet both paved the way for disruptive reformations and revolutions. The challenge isn’t to eliminate all harms, but to manage and mitigate them in ways that don’t undermine the foundations of open societies.
AI is changing how we engage with information, and not always for the better. It allows bad actors to scale attacks on the information ecosystem, potentially eroding the shared reality that democracies rely on for collective action. In our recent global survey across 33 countries, we found strong public support for regulating AI-generated speech and very low tolerance for controversial content like political deepfakes or images mocking national symbols. That fear is now driving new legislation across democracies.
But it’s important that we not freak out and keep our perspective. Take the 2024 panic over AI-driven disinformation. Yes, we saw deepfakes and viral hoaxes. But studies from Princeton, the EU, and the Alan Turing Institute found no evidence that AI disinformation changed election results in the U.S., Europe, or India. The real threat wasn’t the content; it was the erosion of trust. And clamping down on speech tends to make that worse, not better.
That’s why Taiwan’s model deserves more attention. Under constant pressure from Chinese propaganda, Taiwan didn’t respond with censorship. Instead, they built a civic-tech ecosystem rooted in openness and public participation. Fact-checking in Taiwan is often crowdsourced and increasingly AI-enhanced. Tools like Cofacts let users submit suspected disinformation, which is then reviewed by a combination of community members and automated tools. The process is fast, transparent, and bottom-up.
Taiwan has also pioneered alignment assemblies—citizen deliberation platforms that foster shared understanding rather than fueling outrage. They show how technology can strengthen, rather than undermine, democratic resilience.
These are lessons for the wider democratic world. If Taiwan can resist authoritarian interference without sacrificing free speech, then so can we.
This brings us to regulation. We’re seeing two competing models of AI governance emerge—what I call User Empowerment versus Preemptive Safetyism.
User Empowerment sees generative AI as a powerful but neutral tool. Harm lies not in the tool itself, but in how it’s used and by whom. This approach affirms that free expression includes not just the right to speak, but the right to access information across borders and media—a collective good essential to informed choice and democratic life. Preemptive Safetyism, by contrast, treats certain speech as inherently harmful, regardless of context or intent, and aims to block it at the source. Like pre-publication censorship in the age of the printing press, it seeks to prevent disfavored ideas from being created and diffused at all.
And Preemptive Safetyism is gaining ground. In the U.S., states like California and Minnesota have passed laws targeting deepfakes in elections. But some go too far. Minnesota criminalizes AI-generated content that might “influence” an election or “injure” a candidate—language so vague it could chill satire or dissent. A federal judge in California blocked a similar law, calling it “a hammer instead of a scalpel.”
Europe’s new AI Act blends both instincts. It mandates watermarking—fair enough. But it also requires platforms to mitigate undefined “systemic risks to society.” That vagueness could push companies to preemptively scrub lawful, but controversial, speech.
In China, the logic of Preemptive Safetyism has been taken to its extreme. As mentioned previously, AI is explicitly required to align with “core socialist values,” mixing censorship with state propaganda.
So yes, some AI regulation is necessary to prevent catastrophic harms and narrow categories of content like child sexual abuse material. But if democracies regulate out of fear instead of principle, bad governance could easily end up being more dangerous than bad content.
Jacob Mchangama is the Executive Director of The Future of Free Speech and a research professor at Vanderbilt University. He is also a senior fellow at The Foundation for Individual Rights and Expression (FIRE) and the author of Free Speech: A History From Socrates to Social Media.
Watch my full conversation with CBS News’ Melissa Mahtani from the Copenhagen Democracy Summit starting at 43:28 in this video.
