AI, Mass Surveillance, and the Fight for Democratic Limits
Unchecked AI surveillance is not only a privacy problem; it is a free speech emergency.
Imagine a large language model (LLM) that can identify you from your anonymous online posts just by analyzing your writing style, interests, and incidental mentions of details from your life. Now imagine those tools in the hands of a government apparatus, with no limits, checks, or safeguards.
A recent contract dispute between Anthropic and the U.S. Department of War illustrates how urgent these issues have become.
Anthropic, developer of the chatbot Claude, agreed in 2025 to integrate its models into classified military workflows. But it drew two red lines: its systems would not be used for mass surveillance of Americans and would not power fully autonomous weapons.
The Pentagon later sought broader language permitting use for “all lawful purposes.” Anthropic declined to accept the revised terms, arguing that they lacked meaningful safeguards. The administration then moved to phase out the company’s products and designated it a national security supply-chain risk. OpenAI subsequently struck a deal with the Pentagon to deploy its models on classified networks, and xAI received approval for classified work as well.
But the Anthropic clash was not just about one company or one contract. It surfaced a structural tension that will define AI governance for decades: as the technical capacity to surveil entire populations expands, who decides where the limits lie, and how are those limits enforced?
For most of history, mass surveillance was expensive. It required personnel, time, and physical proximity. Governments that wanted to monitor their citizens faced practical constraints that, in effect, checked the scope of state surveillance.
Artificial intelligence and cheap, large-scale data collection are dissolving those constraints. The capacity to collect, process, and interpret human behavior at scale is expanding so rapidly that the practical limits that once bounded surveillance are falling away, and legal boundaries are struggling to keep pace.
The stakes extend beyond privacy. Mass surveillance, left unchecked, is a direct threat to freedom of expression: the right to speak, to assemble, and to dissent without fear of identification and retaliation. When citizens know that their faces, movements, and words are being recorded and analyzed, the calculus of speaking out changes.
The question is no longer just “do I have the right to say this?” but “will I be identified for saying it, and what might follow?” AI-powered surveillance is reshaping that calculus, making the chilling effect on expression not occasional but pervasive.
The Global Spread of AI Surveillance
This is not a hypothetical debate about future capabilities. Around the world, governments are already deploying AI systems to identify individuals in public spaces, aggregate data across databases, analyze online speech, and generate risk assessments at scale.
Surveillance infrastructures are expanding in the name of security, public order, and administrative efficiency. The pattern is visible across authoritarian, hybrid, and democratic systems alike, and a closer look at each category reveals how surveillance tools spread across borders and down the spectrum of regime types.
Authoritarian Regimes: The Surveillance Laboratory
AI surveillance is most advanced in authoritarian states, where it has been integrated into the architecture of political control. In China, the government has deployed up to 600 million surveillance cameras, many equipped with AI-powered facial recognition. In Xinjiang, this infrastructure reached its most extreme expression: the Integrated Joint Operations Platform aggregates data from facial recognition cameras, iris scanners, checkpoints, and financial records to flag behavior it deems suspicious, generating lists of people for interrogation and detention. Researchers have described the region as a frontline laboratory for AI-driven social control.
Critically, the technology does not stay in Xinjiang or in China. Chinese firms have exported surveillance technology to over 80 countries. Hikvision and Dahua, both partly state-owned, together dominate the global surveillance camera market. Human rights organizations have documented how these systems have been used to monitor dissidents and suppress opposition in Kenya, Uganda, Ethiopia, and Ecuador, among others. Now, Chinese companies are developing large language models for minority languages, including Uyghur and Tibetan, that could be used to monitor communications and control the information those communities receive.
In Iran, a 2025 United Nations report documented the regime’s use of drones, facial recognition, and a government-backed app called “Nazer” to enforce mandatory hijab laws. Facial recognition cameras have been installed at university entrances, and parliament approved legislation authorizing expanded AI surveillance of women, with penalties of up to ten years in prison.
Democracies and Hybrid Regimes: The Migration of Surveillance Tools
But AI-driven surveillance is not confined to authoritarian regimes. It has also gained ground elsewhere, including in Europe and the Americas. In March 2025, the Hungarian Parliament passed amendments banning LGBTQ+ gatherings and expanding facial recognition surveillance to identify attendees. Ahead of Budapest Pride in June, authorities installed additional cameras across the city center to identify participants. Civil society groups and MEPs urged the European Commission to investigate whether Hungary’s actions violated the EU AI Act’s prohibition on real-time biometric identification in public spaces, and Commission officials indicated they would assess compliance with the Act.
The use of AI for policing is expanding even in countries with strong democratic traditions. Ahead of the 2024 Paris Olympics, France became the first EU country to legalize algorithmic video surveillance in public spaces, authorizing AI systems to scan crowds in real time for suspicious behavior. After the Games, the Paris police prefect publicly backed making the technology permanent, and the incoming prime minister at the time, Michel Barnier, pledged to generalize the methods used during the Olympics.
In the United Kingdom, the Metropolitan Police scanned more than 3.5 million faces in 2025, and facial recognition vans are being rolled out to forces across England and Wales. Civil liberties groups note that 80 percent of the innocent people wrongly flagged by the system were Black, and that the entire expansion is proceeding without dedicated legislation or a vote in Parliament.
And in the United States, the federal government has dramatically accelerated AI-powered surveillance, with immigration enforcement as the primary driver. The Department of Homeland Security’s latest AI inventory disclosed more than 200 active or in-development AI use cases across the department, nearly 40 percent more than just six months earlier.
In 2025, U.S. Immigration and Customs Enforcement (ICE) deployed Mobile Fortify, a smartphone app that allows agents to photograph individuals in the field and instantly run the images against government databases, regardless of citizenship status. People cannot decline to be scanned, and photos are retained for 15 years even when there is no match. ICE agents have also used the app to identify bystanders and activists observing immigration operations in Minneapolis, Chicago, and Portland.
In 2025, the administration also launched the “Catch and Revoke” initiative, aimed at revoking the visas of foreign nationals who appear to support Hamas or other designated terrorist groups. The effort relied on AI-assisted reviews of tens of thousands of social media accounts belonging to student visa holders.
There are currently no federal laws restricting how ICE uses facial recognition, and the American Immigration Council has warned that AI surveillance tools originally built for border enforcement are migrating rapidly into domestic policing, creating data pipelines that reach well beyond undocumented immigrants to affect U.S. citizens.
AI-enabled surveillance is ushering in an era of continuous monitoring in which expression can be tracked, analyzed, and disciplined at scale. In a world where anyone can be identified, profiled, and potentially arrested or sanctioned for attending a demonstration or voicing a controversial opinion online, participation in public life becomes an act of courage.
The privacy concerns raised by mass AI surveillance are only part of what is at stake for fundamental rights. From Budapest to Portland, these tools threaten free speech. When people know or suspect they are being watched, they self-censor. They may avoid protests, moderate what they post online, or withdraw from political activity altogether. If the use of AI surveillance tools expands, the chilling effect on speech will only grow.
Language Models Expanding Surveillance Capacity
The surveillance systems described above primarily observe and identify. Large language models represent something qualitatively different: they do not merely collect data but interpret, classify, infer, and generate, closing the gap between data collection and meaning-making that has, until now, imposed practical limits on what surveillance can actually achieve.
LLMs can process millions of social media posts, detect sentiment, and infer political orientation at industrial scale. Research has shown that they can accurately deduce users’ demographics from social media activity alone. They break language barriers: an authoritarian government no longer needs translators for every minority language; it only needs a model. They go beyond keyword matching to grasp context, irony, and coded language, inferring intent from messages that never use flagged terms. And they can synthesize surveillance data into polished threat assessments, making AI-generated conclusions harder to challenge. Unlike bespoke surveillance infrastructure, LLMs are increasingly available as cheap commercial products or open-source tools. The democratization of AI also means the democratization of repression.
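To make concrete how little engineering this requires, consider the following minimal sketch. It assumes an OpenAI-compatible API; the prompt wording and model name are illustrative placeholders, not any specific deployed system. A single generic request turns a handful of public posts into a crude political profile.

```python
# Illustrative sketch only: an assumed OpenAI-compatible API, a made-up
# prompt, and a placeholder model name -- not any real deployed system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "From the posts below, infer the author's likely political "
    "orientation (left / right / center / unclear) and their sentiment "
    "toward the government (positive / negative / neutral). "
    "Reply with the two labels only.\n\nPosts:\n{posts}"
)

def profile_author(posts: list[str], model: str = "gpt-4o-mini") -> str:
    """Return a crude profile inferred from a handful of public posts."""
    joined = "\n".join(f"- {p}" for p in posts)
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(posts=joined)}],
    )
    return reply.choices[0].message.content

# Scaling this loop to millions of scraped accounts is an engineering
# exercise, not a research problem -- which is precisely the concern.
```

Nothing here requires training a model or building infrastructure; repeating the call across millions of accounts is a matter of budget, not expertise.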
A recent paper shows that LLMs can now identify people from their anonymous online posts alone, with no names or usernames to start from. The system works by reading writing style, interests, and incidental disclosures in everyday posts, then autonomously searching the web to match them to a real identity. Until now, online anonymity has rested on what the researchers themselves call “practical obscurity”: the idea that while identifying someone from their pseudonymous posts was theoretically possible, it required hours of work by a skilled investigator and was too costly to do at scale. LLMs remove that barrier. The researchers warn that, as frontier models become more capable, governments could gain the tools to unmask online critics and political dissidents at scale.
For free expression, this is a qualitative shift. Anonymous and pseudonymous speech has always been central to democratic discourse, from the Federalist Papers to contemporary whistleblowers and bloggers in repressive states. The ability to speak without revealing one’s identity has historically enabled dissent, minority viewpoints, and accountability journalism. If LLMs can strip away that anonymity, the consequences extend far beyond privacy: they reach the foundations of open public debate.
LLMs are also transforming physical surveillance in ways that go beyond monitoring what people say or write online. The same advances behind LLMs are powering Vision Language Models (VLMs), which combine image recognition with broad world knowledge to analyze video feeds in real time across an almost unlimited range of contexts. They can be queried in plain language, allowing anyone to issue surveillance instructions without technical expertise. Models can analyze hours of video cheaply and may increasingly run locally, putting the technology within reach of a far wider range of actors than traditional surveillance infrastructure ever was. As the ACLU has warned, a risk shared by LLMs and VLMs is that they may be just reliable enough that operators come to trust their outputs uncritically, leading to potential injustices.
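The barrier to entry is, again, strikingly low. The sketch below sends a single video frame to a multimodal model with a plain-language question; it assumes an OpenAI-compatible multimodal endpoint, and the model name, file path, and question are illustrative, not drawn from any real deployment.

```python
# Illustrative sketch only: assumes an OpenAI-compatible multimodal API;
# the model name, file path, and question are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_frame(image_path: str, question: str,
                    model: str = "gpt-4o-mini") -> str:
    """Pose a plain-language question about a single video frame."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    reply = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return reply.choices[0].message.content

# No classifier is trained and no pipeline is built; the "surveillance
# instruction" is just a sentence.
print(ask_about_frame("frame_0421.jpg",
                      "Is anyone in this frame holding a protest sign?"))
```

The point is not this particular call but its shape: the operator needs no model, no training data, and no technical vocabulary, only a question.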
This matters because language models do not simply enhance existing surveillance — they unlock entirely new categories of it. Profiling from unstructured text, surveilling across languages without human translators, and automating intelligence analysis were not previously feasible at a population level. They are now. Democracies need laws that account for these specific capabilities, not just the cameras and databases that have dominated the policy conversation to date.
Establishing Legal Limits on AI Surveillance
The most ambitious legal response yet to address mass surveillance concerns is the European Union’s AI Act. While we have voiced serious concerns about aspects of this regulation, it nonetheless represents a global first in placing meaningful limits on AI-driven surveillance and in seeking to protect citizens from its most intrusive applications.
The Act does not create a blanket ban on mass surveillance, but it directly targets several AI practices that enable it. Article 5 prohibits AI systems that build facial-recognition databases through untargeted scraping of images from the internet or CCTV footage. The Act also bans social scoring systems that classify and penalize individuals based on behavioral profiles, where the resulting score leads to detrimental treatment in unrelated contexts or treatment that is disproportionate to the behavior. It prohibits predictive policing systems that assess criminal risk based solely on profiling or personality traits.
The Act also bans biometric categorization systems that infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. However, several of these prohibitions contain significant carve-outs for law enforcement. The ban on biometric categorization, for example, explicitly excludes the labelling or filtering of lawfully acquired biometric datasets, as well as the categorization of biometric data in the area of law enforcement. Similarly, the Act prohibits emotion recognition in workplaces and educational settings, but not in other contexts, such as law enforcement or migration and border control.
Critically, the Act also imposes a near-total prohibition on real-time remote biometric identification in public spaces for law enforcement, permitting it only in narrowly defined emergencies — searching for missing persons, preventing imminent terrorist threats, identifying suspects of serious crimes — and even then it requires an authorization from a judicial authority or an independent administrative authority, a fundamental rights impact assessment, and strict temporal and geographic limits.
This is not to say that the AI Act perfectly safeguards citizens against surveillance. The Act’s restrictions on real-time biometric identification do not extend equally to retrospective uses. “Post-remote” biometric identification — analyzing recorded footage after an event — is not prohibited but classified as high-risk, subject to compliance obligations, such as risk management, rather than a ban. As some organizations have warned, this loophole could allow law enforcement to review CCTV footage of a political protest and identify every participant after the fact, even without individualized suspicion. The chilling implications for freedom of assembly and expression are obvious: the knowledge that one’s face may be matched to a database days or weeks after a march can deter participation as effectively as any real-time system.
In addition, AI systems used exclusively for military, defense, or national security purposes fall entirely outside its scope — a major gap from a civil-liberties perspective, and precisely the kind of gap the Anthropic dispute exposed. And Hungary’s defiance — passing facial recognition laws that civil liberties organizations argue violate the Act’s prohibitions — raises a pointed question: can the EU enforce its own red lines against a member state determined to cross them?
The Conversation We Need to Be Having
We do not pretend to have a complete solution. The challenges outlined in this piece are genuinely hard problems: the spread of surveillance tools across regime types, the qualitative escalation that large language models represent, and the tension between security imperatives and civil liberties. Anyone who claims to have a neat answer is not taking them seriously enough.
But one thing should be clear: unchecked surveillance is not only a privacy problem; it is a free speech emergency. The cases discussed in this article involve the same underlying dynamic: the state’s capacity to watch, identify, and profile its citizens is being used, or is available to be used, to suppress, deter, or punish expression. Whether it is Hungary deploying facial recognition to identify Pride marchers, the United States scanning the social media of student visa holders, or China building LLMs that can be used to monitor Uyghur communications, the target is speech. Privacy is the mechanism; expression is the casualty.
Not having a finished answer is not a reason to avoid the conversation. It is, in fact, all the more reason to have it now, while the architecture of AI surveillance is still being built, before the political and institutional path dependencies harden, and while democratic societies still have the opportunity to shape the terms on which these technologies are deployed. The Anthropic dispute, the expansion of facial recognition, and the rollout of AI-powered immigration enforcement are not isolated episodes. They are early chapters of a story whose ending has not yet been written. This is a rare and important moment to take these risks seriously.
The EU AI Act, for all its imperfections, offers one model of what that effort can look like. The provisions described above do not resolve every tension, but they represent a genuine attempt to draw lines: to say that certain uses of AI are incompatible with a free society and to build legal mechanisms to enforce those limits.
That effort deserves both scrutiny and respect. Other democracies would do well to consider it — not necessarily to replicate it, but to learn from it as they develop their own frameworks. At the same time, the AI Act’s shortcomings — such as the post-remote biometric loophole and the sweeping national security exemption — should serve as cautionary lessons. Future regulatory efforts must address these gaps if they are to protect not only privacy but the conditions under which free expression is possible.
The question before democratic societies is not whether AI will be used in security and governance; it will. It is whether we will insist on having this conversation openly, take the risks to our fundamental rights seriously before they become irreversible, and build the kind of legal and institutional infrastructure that can hold the line even when it is inconvenient. The window for that conversation is open. It will not stay open indefinitely. For organizations committed to free expression, the stakes could not be higher.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech at Vanderbilt University.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.