<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Bedrock Principle: Jordi Calvet-Bademunt]]></title><description><![CDATA[Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.]]></description><link>https://www.bedrockprinciple.com/s/jordi-calvet-bademunt</link><image><url>https://substackcdn.com/image/fetch/$s_!N6E0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F811faa6e-5bb3-4678-ae9d-461d4cf7f41e_1000x1000.png</url><title>The Bedrock Principle: Jordi Calvet-Bademunt</title><link>https://www.bedrockprinciple.com/s/jordi-calvet-bademunt</link></image><generator>Substack</generator><lastBuildDate>Sat, 16 May 2026 12:30:34 GMT</lastBuildDate><atom:link href="https://www.bedrockprinciple.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[The Future of Free Speech]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thebedrockprinciple@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thebedrockprinciple@substack.com]]></itunes:email><itunes:name><![CDATA[The Future of Free Speech]]></itunes:name></itunes:owner><itunes:author><![CDATA[The Future of Free Speech]]></itunes:author><googleplay:owner><![CDATA[thebedrockprinciple@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thebedrockprinciple@substack.com]]></googleplay:email><googleplay:author><![CDATA[The Future of Free Speech]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI, Mass Surveillance, and The Fight for Democratic 
Limits]]></title><description><![CDATA[Unchecked AI surveillance is not only a privacy problem; it is a free speech emergency.]]></description><link>https://www.bedrockprinciple.com/p/ai-mass-surveillance-and-the-fight</link><guid isPermaLink="false">https://www.bedrockprinciple.com/p/ai-mass-surveillance-and-the-fight</guid><dc:creator><![CDATA[Jordi Calvet-Bademunt]]></dc:creator><pubDate>Wed, 04 Mar 2026 20:59:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mvO3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mvO3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mvO3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!mvO3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!mvO3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!mvO3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mvO3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:648159,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.bedrockprinciple.com/i/189918656?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mvO3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!mvO3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!mvO3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!mvO3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25ab102-b24c-45fb-8bcc-c31ab922f395_2000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p style="text-align: justify;">Imagine a large language model (LLM) that can identify you from your anonymous online posts just
by analyzing your writing style, interests, and passing mentions of obscure details from your life. Now imagine those tools in the hands of a government apparatus, with no limits, checks, or safeguards.</p><p style="text-align: justify;">A recent contract dispute between Anthropic and the U.S. Department of War illustrates how urgent these issues have become.</p><p style="text-align: justify;">Anthropic, developer of the chatbot Claude, agreed in 2025 to integrate its models into classified military workflows. But it drew <strong><a href="https://www.anthropic.com/news/statement-department-of-war">two red lines:</a></strong> its systems would not be used for mass surveillance of Americans and would not power fully autonomous weapons.</p><p style="text-align: justify;">Recently, the Pentagon sought broader language permitting use for &#8220;<strong><a href="https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro">all lawful purposes</a></strong>.&#8221; Anthropic declined to accept the revised terms, saying they lacked meaningful safeguards. The administration then moved to phase out the company&#8217;s products and <strong><a href="https://www.cbsnews.com/news/hegseth-declares-anthropic-supply-chain-risk/">designated it</a></strong> a national security supply-chain risk. <strong><a href="https://www.nbcnews.com/tech/tech-news/trump-bans-anthropic-government-use-rcna261055">OpenAI</a></strong> subsequently struck a deal with the Pentagon to deploy its models on classified networks, and <strong><a href="https://www.axios.com/2026/02/23/ai-defense-department-deal-musk-xai-grok">xAI</a></strong> recently received approval for classified work as well.</p><p style="text-align: justify;">But the Anthropic clash was not just about one company or one contract.
It surfaced a structural tension that will define AI governance for decades: as technical capacity to surveil entire populations expands, who decides where the limits are and how those limits are enforced?</p><p style="text-align: justify;">Throughout history, mass surveillance was <strong><a href="https://www.scientificamerican.com/article/the-real-costs-of-cheap-surveillance/">expensive</a></strong>. It required personnel, time, and physical proximity. Governments that wanted to monitor their citizens faced practical constraints that, in effect, served as a check on the scope of state surveillance.</p><p style="text-align: justify;"><strong><a href="https://www.aclu.org/news/privacy-technology/machine-surveillance-is-being-super-charged-by-large-ai-models">Artificial intelligence</a></strong> and the ease of large-scale data collection are dissolving those costs and constraints. The capacity to collect, process, and interpret human behavior at scale is expanding so rapidly that the practical limits that once bounded surveillance are falling away, and legal boundaries are struggling to keep pace.</p><p style="text-align: justify;">The stakes extend beyond privacy: mass surveillance, left unchecked, is a direct threat to freedom of expression: the right to speak, to assemble, and to dissent without fear of identification and retaliation. When citizens know that their faces, movements, and words are being recorded and analyzed, the calculus of speaking out changes.</p><p style="text-align: justify;">The question is no longer just &#8220;do I have the right to say this?&#8221; but &#8220;will I be identified for saying it, and what might follow?&#8221; AI-powered surveillance is reshaping that calculus, making the chilling effect on expression not occasional but pervasive.</p><h1 style="text-align: justify;"><strong>The Global Spread of AI Surveillance</strong></h1><p style="text-align: justify;">This is not a hypothetical debate about future capabilities. 
Around the world, governments are already deploying AI systems to identify individuals in public spaces, aggregate data across databases, analyze online speech, and generate risk assessments at scale.</p><p style="text-align: justify;">Surveillance infrastructures are expanding, sometimes in the name of security, public order, or administrative efficiency. The pattern is visible across political systems &#8212; authoritarian, hybrid, and democratic &#8212; and a closer look at each category reveals how surveillance tools spread across borders and down the spectrum of regime types.</p><h3 style="text-align: justify;"><strong>Authoritarian Regimes: The Surveillance Laboratory</strong></h3><p style="text-align: justify;">AI surveillance is most advanced in authoritarian states, where it has been integrated into the architecture of political control. In China, the government has deployed up to 600 million <strong><a href="https://www.cnn.com/2025/12/04/china/china-ai-censorship-surveillance-report-intl-hnk">surveillance cameras</a></strong>, many equipped with AI-powered facial recognition. In Xinjiang, this infrastructure reached its most extreme expression: the Integrated Joint Operations Platform <strong><a href="https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass">aggregates data</a></strong> from facial recognition cameras, iris scanners, checkpoints, and financial records to flag behavior it deems suspicious and trigger detention automatically. Researchers have <strong><a href="https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass">described</a></strong> the region as a frontline laboratory for AI-driven social control.</p><p style="text-align: justify;">Critically, the technology does not stay in Xinjiang or in China.
Chinese firms have <strong><a href="https://cset.georgetown.edu/wp-content/uploads/CSET-Designing-Alternatives-to-Chinas-Surveillance-State.pdf">exported surveillance technology</a></strong> to over 80 countries. Hikvision and Dahua, both partly state-owned, together <strong><a href="https://www.dw.com/en/western-countries-are-banning-chinese-tech-why-is-it-still-spreading/a-65106709#:~:text=Do%20you%20have%20a%20security,according%20to%20recent%20media%20reports.">dominate</a></strong> the global surveillance camera market. Human rights organizations have documented how these systems <strong><a href="https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/what-is-driving-the-adoption-of-chinese-surveillance-technology-in-africa/">have been used</a></strong> to monitor dissidents and suppress opposition in Kenya, Uganda, Ethiopia, and Ecuador, among others. Now, Chinese companies are developing large language models for <strong><a href="https://www.cnn.com/2025/12/04/china/china-ai-censorship-surveillance-report-intl-hnk">minority languages</a></strong>, including Uyghur and Tibetan, that could be used to monitor communications and control the information those communities receive.</p><p style="text-align: justify;">In Iran, a 2025 United Nations report documented the regime&#8217;s use of <strong><a href="https://www.cnn.com/2025/03/14/middleeast/iran-nazer-app-un-report-intl-latam">drones, facial recognition, and a government-backed app</a></strong> called &#8220;Nazer&#8221; to enforce mandatory hijab laws. Facial recognition cameras have been installed at university entrances, and parliament approved legislation authorizing expanded AI surveillance of women, with penalties of up to ten years in prison.</p><h3 style="text-align: justify;"><strong>Democracies and Hybrid Regimes: The Migration of Surveillance Tools</strong></h3><p style="text-align: justify;">But AI-driven surveillance is not confined to authoritarian regimes.
It has also gained ground elsewhere, including in Europe and the Americas. In March 2025, the Hungarian Parliament <strong><a href="https://www.pbs.org/newshour/world/hungary-passes-anti-lgbtq-law-banning-pride-events">passed amendments</a></strong> banning LGBTQ+ gatherings and expanding facial recognition surveillance to identify attendees. Ahead of Budapest Pride in June, authorities <strong><a href="https://www.npr.org/2025/06/28/nx-s1-5449685/hungary-budapest-pride-defies-ban">installed</a></strong> additional cameras across the city center to identify participants. <strong><a href="https://edri.org/our-work/the-commission-must-uphold-the-ai-act-and-fundamental-freedoms-in-hungary/">Civil&#8209;society groups</a></strong> and <strong><a href="https://www.euronews.com/my-europe/2025/03/26/exclusive-hungarys-gay-pride-surveillance-would-breach-the-eus-ai-act-says-leading-mep">MEPs</a></strong> urged the European Commission to investigate whether Hungary&#8217;s actions violated the EU AI Act&#8217;s prohibition on real&#8209;time biometric identification in public spaces, and Commission officials indicated they would <strong><a href="https://www.politico.eu/article/hungary-eu-watchlist-facial-recognition-surveillance-lgbtq-pride/">assess compliance</a></strong> with the AI Act.</p><p style="text-align: justify;">The use of AI for policing is expanding even in countries with strong democratic traditions. Ahead of the 2024 Paris Olympics, France became the first EU country to <strong><a href="https://www.loc.gov/item/global-legal-monitor/2023-11-15/france-new-law-establishes-legal-framework-for-2024-olympic-and-paralympic-games/">legalize algorithmic video surveillance</a></strong> in public spaces, authorizing AI systems to scan crowds in real time for suspicious behavior. 
After the Games, the Paris police prefect <strong><a href="https://www.lemonde.fr/societe/article/2024/09/25/le-prefet-de-police-de-paris-se-dit-favorable-a-une-prolongation-du-recours-a-la-videosurveillance-algorithmique_6333125_3224.html">publicly backed</a></strong> making the technology permanent, and the incoming prime minister at the time, Michel Barnier, pledged to generalize the methods used during the Olympics.</p><p style="text-align: justify;">In the United Kingdom, the Metropolitan Police <strong><a href="https://www.bbc.com/news/articles/c07x21jlnndo">scanned</a></strong> more than 3.5 million faces in 2025 and is <strong><a href="https://www.gov.uk/government/news/live-facial-recognition-technology-to-catch-high-harm-offenders">rolling out</a></strong> facial recognition vans to forces across England and Wales. Civil liberties groups <strong><a href="https://bigbrotherwatch.org.uk/press-coverage/big-brother-watch-responds-to-the-metropolitan-polices-2025-live-facial-recognition-report/">note</a></strong> that 80 percent of innocent people wrongly flagged by the system were Black, and that the entire expansion is proceeding without any dedicated legislation or a vote in Parliament.</p><p style="text-align: justify;">And in the United States, the federal government has dramatically <strong><a href="https://www.nytimes.com/2026/01/30/technology/tech-ice-facial-recognition-palantir.html">accelerated</a></strong> AI-powered surveillance, with immigration enforcement as the primary driver. The Department of Homeland Security&#8217;s <strong><a href="https://www.techpolicy.press/dhs-ai-surveillance-arsenal-grows-as-agency-defies-courts/">latest AI inventory</a></strong> disclosed more than 200 active or in-development AI use cases across the agency, nearly 40 percent more than just six months earlier.</p><p style="text-align: justify;">In 2025, U.S. 
Immigration and Customs Enforcement (ICE) <strong><a href="https://www.wired.com/story/mobile-fortify-face-recognition-nec-ice-cbp/#:~:text=On%20Wednesday%2C%20the%20Department%20of,was%20developed%20partially%20in%20house.">deployed</a></strong> Mobile Fortify, a smartphone app that allows agents to photograph individuals in the field and instantly run the image against government databases, regardless of citizenship status. People <strong><a href="https://www.npr.org/2025/11/08/nx-s1-5585691/ice-facial-recognition-immigration-tracking-spyware">cannot decline</a></strong> to be scanned, and photos are retained for 15 years even when there is no match. ICE agents <strong><a href="https://www.nbcnews.com/tech/security/ice-agent-facial-recognition-video-protest-movile-fortify-photo-rcna257331">have also used</a></strong> the app to identify bystanders and activists observing immigration operations in Minneapolis, Chicago, and Portland.</p><p style="text-align: justify;">In 2025, the administration also launched the &#8220;<strong><a href="https://www.axios.com/2025/03/06/state-department-ai-revoke-foreign-student-visas-hamas">Catch and Revoke</a></strong>&#8221; initiative, aimed at canceling the visas of foreign nationals who appear to support Hamas or other designated terrorist groups.
The effort relied on AI-assisted reviews of tens of thousands of social media accounts belonging to student visa holders.</p><p style="text-align: justify;">There are currently <strong><a href="https://www.cato.org/commentary/ice-cbps-use-facial-recognition-technology-needs-guardrails-now">no federal laws</a></strong> restricting how ICE uses facial recognition, and the American Immigration Council <strong><a href="https://www.americanimmigrationcouncil.org/blog/ice-ai-surveillance-tracking-americans/">has warned</a></strong> that AI surveillance tools originally built for border enforcement are migrating rapidly into domestic policing, creating data pipelines that reach well beyond undocumented immigrants to affect U.S. citizens.</p><p style="text-align: justify;">AI-enabled surveillance is ushering in an era of continuous monitoring in which expression can be tracked, analyzed, and disciplined at scale. In a world where anyone can be identified, profiled, and potentially arrested or sanctioned for attending a demonstration or voicing a controversial opinion online, participation in public life becomes an act of courage.</p><p style="text-align: justify;">The privacy concerns stemming from mass AI surveillance are only part of the equation when it comes to our fundamental rights. From Budapest to Portland, these tools threaten our free speech. When people know or suspect they are being watched, they self-censor. They may avoid protests, moderate what they post online, or even withdraw from political activity altogether. If the use of these AI surveillance tools expands, the chilling effect on speech will only grow.</p><h1><strong>Language Models Expanding Surveillance Capacity</strong></h1><p style="text-align: justify;">The surveillance systems described above primarily observe and identify. 
Large language models represent something qualitatively different: they do not merely collect data but interpret, classify, infer, and generate, closing the gap between data collection and meaning-making that has, until now, imposed practical limits on what surveillance can actually achieve.</p><p style="text-align: justify;">LLMs can process millions of social media posts, detect sentiment, and infer political orientation at industrial scale. <strong><a href="https://arxiv.org/abs/2507.12372">Research</a></strong> has shown that they can accurately deduce users&#8217; demographics from social media activity alone. They break language barriers: an authoritarian government no longer needs translators for every minority language; it only needs a model. They go beyond keyword matching to better understand context, irony, and coded language, inferring intent from messages that never use flagged terms. And they can synthesize surveillance data into polished threat assessments, making AI-generated conclusions harder to challenge. Unlike bespoke surveillance infrastructure, LLMs are increasingly available as cheap commercial products or open-source tools. The democratization of AI also means the democratization of repression.</p><p style="text-align: justify;">A <strong><a href="https://arxiv.org/pdf/2602.16800">recent paper</a></strong> shows that LLMs can now identify people from their anonymous online posts alone, with no names or usernames to start from. The system works by reading writing style, interests, and incidental disclosures in everyday posts, then autonomously searching the web to match them to a real identity. Until now, online anonymity has rested on what the researchers themselves call &#8220;practical obscurity&#8221;: the idea that while identifying someone from their pseudonymous posts was theoretically possible, it required hours of work by a skilled investigator and was too costly to do at scale. LLMs remove that barrier.
The researchers <strong><a href="https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/">warn</a></strong> that as frontier models become more capable, governments could gain the tools to unmask online critics and political dissidents easily and at scale.</p><p style="text-align: justify;">For free expression, this is a qualitative shift. Anonymous and pseudonymous speech has always been central to democratic discourse, from the Federalist Papers to contemporary whistleblowers and bloggers in repressive states. The ability to speak without revealing one&#8217;s identity has historically enabled dissent, minority viewpoints, and accountability journalism. If LLMs can strip away that anonymity, the consequences extend far beyond privacy: they reach the foundations of open public debate.</p><p style="text-align: justify;">LLMs are also transforming physical surveillance in ways that go beyond monitoring what people say or write online. The <strong><a href="https://www.aclu.org/news/privacy-technology/machine-surveillance-is-being-super-charged-by-large-ai-models">same advances</a></strong> behind LLMs are powering Vision Language Models (VLMs), which combine image recognition with broad world knowledge to analyze video feeds in real time across an almost unlimited range of contexts. They can be easily queried, allowing anyone to issue surveillance instructions without technical expertise. Models can analyze hours of video cheaply and may increasingly run locally, putting the technology within reach of a far wider range of actors than traditional surveillance infrastructure ever was.
As the ACLU <strong><a href="https://www.aclu.org/news/privacy-technology/machine-surveillance-is-being-super-charged-by-large-ai-models">has warned</a></strong>, a risk shared by LLMs and VLMs is that they may be just reliable enough that operators overly trust their outputs, leading to potential injustices.</p><p style="text-align: justify;">This matters because language models do not simply enhance existing surveillance &#8212; they unlock entirely new categories of it. Profiling from unstructured text, surveilling across languages without human translators, and automating intelligence analysis were not previously feasible at a population level. They are now. Democracies need laws that account for these specific capabilities, not just the cameras and databases that have dominated the policy conversation to date.</p><h1><strong>Establishing Legal Limits on AI Surveillance</strong></h1><p style="text-align: justify;">The most ambitious legal response yet to mass surveillance is the European Union&#8217;s AI Act.
While we have voiced serious concerns <strong><a href="https://www.bedrockprinciple.com/p/who-decides-whats-good-for-society">about</a></strong> <strong><a href="https://www.techpolicy.press/dutch-warning-on-chatbots-echoes-trump-attacks-on-woke-ai/">aspects</a></strong> of this regulation, it nonetheless represents a global first in placing meaningful limits on AI-driven surveillance and in seeking to protect citizens from its most intrusive applications.</p><p style="text-align: justify;">The Act does not create a blanket ban on mass surveillance, but it directly targets several AI practices that enable it. Article 5 prohibits AI systems that build facial-recognition databases through untargeted scraping of images from the internet or CCTV footage. The Act also bans social scoring systems that classify and penalize individuals based on behavioral profiles, where the resulting score leads to detrimental treatment in unrelated contexts or treatment that is disproportionate to the behavior. It prohibits predictive policing systems that assess criminal risk based solely on profiling or personality traits.</p><p style="text-align: justify;">The Act also bans biometric categorization systems that infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. However, several of these prohibitions contain significant carve-outs for law enforcement. 
For example, the ban on biometric categorization explicitly excludes the labelling or filtering of lawfully acquired biometric datasets, as well as the categorization of biometric data for law enforcement purposes. The Act likewise prohibits emotion recognition in workplaces and educational settings, but not in other contexts, such as law enforcement or migration and border control.</p><p style="text-align: justify;">Critically, the Act also imposes a near-total prohibition on real-time remote biometric identification in public spaces for law enforcement, permitting it only in narrowly defined emergencies &#8212; searching for missing persons, preventing imminent terrorist threats, identifying suspects of serious crimes &#8212; and even then it requires authorization from a judicial authority or an independent administrative authority, a fundamental rights impact assessment, and strict temporal and geographic limits.</p><p style="text-align: justify;">This is <strong><a href="https://verfassungsblog.de/ai-act-and-the-prohibition-of-real-time-biometric-identification/">not to say</a></strong> that the AI Act perfectly safeguards citizens against surveillance. The Act&#8217;s restrictions on real-time biometric identification do not extend equally to retrospective uses. &#8220;Post-remote&#8221; biometric identification &#8212; analyzing recorded footage after an event &#8212; is not prohibited but classified as high-risk, subject to compliance obligations, such as risk management, rather than a ban. As some <strong><a href="https://edri.org/our-work/the-ai-act-isnt-enough-closing-the-dangerous-loopholes-that-enable-rights-violations/">organizations</a></strong> have warned, this loophole could allow law enforcement to review CCTV footage of a political protest and identify every participant after the fact, even without individualized suspicion.
The chilling implications for freedom of assembly and expression are obvious: the knowledge that one&#8217;s face may be matched to a database days or weeks after a march can deter participation as effectively as any real-time system.</p><p style="text-align: justify;">In addition, AI systems used exclusively for military, defense, or national security purposes fall entirely outside the Act&#8217;s scope &#8212; a major gap from a <strong><a href="https://www.techpolicy.press/when-national-security-becomes-a-shield-for-evading-ai-accountability/">civil-liberties perspective</a></strong>, and precisely the kind of gap the Anthropic dispute exposed. And Hungary&#8217;s defiance &#8212; passing facial recognition laws that <strong><a href="https://ecnl.org/news/hungarys-new-biometric-surveillance-laws-violate-ai-act">civil liberties organizations argue</a></strong> violate the Act&#8217;s prohibitions &#8212; raises a pointed question: can the EU enforce its own red lines against a member state determined to cross them?</p><h1><strong>The Conversation We Need to Be Having</strong></h1><p style="text-align: justify;">We do not pretend to have a complete solution. The challenges outlined in this piece &#8212; the spread of surveillance tools across regime types, the qualitative escalation that large language models represent, the tension between security imperatives and civil liberties &#8212; are genuinely hard problems, and anyone who claims to have a neat answer is not taking them seriously enough.</p><p style="text-align: justify;">But one thing should be clear: unchecked surveillance is not only a privacy problem; it is a free speech emergency. The cases discussed in this article involve the same underlying dynamic: the state&#8217;s capacity to watch, identify, and profile its citizens is being used, or is available to be used, to suppress, deter, or punish expression.
Whether it is Hungary deploying facial recognition to identify Pride marchers, the United States scanning the social media of student visa holders, or China building LLMs that can be used to monitor Uyghur communications, the target is speech. Privacy is the mechanism; expression is the casualty.</p><p style="text-align: justify;">Not having a finished answer is not a reason to avoid the conversation. It is, in fact, all the more reason to have it now, while the architecture of AI surveillance is still being built, before the political and institutional path dependencies harden, and while democratic societies still have the opportunity to shape the terms on which these technologies are deployed. The Anthropic dispute, the expansion of facial recognition, and the rollout of AI-powered immigration enforcement are not isolated episodes. They are early chapters of a story whose ending has not yet been written. This is a rare and important moment to take these risks seriously.</p><p style="text-align: justify;">The EU AI Act, for all its imperfections, offers one model for what that can look like. The provisions mentioned above do not resolve every tension, but they represent a genuine attempt to draw lines: to say that certain uses of AI are incompatible with a free society and to build legal mechanisms to enforce those limits. </p><p style="text-align: justify;">That effort deserves both scrutiny and respect. Other democracies would do well to consider it &#8212; not necessarily to replicate it, but to learn from it as they develop their own frameworks. At the same time, the AI Act&#8217;s shortcomings &#8212; such as the post-remote biometric loophole and the sweeping national security exemption &#8212; should serve as cautionary lessons. 
Future regulatory efforts must address these gaps if they are to protect not only privacy but the conditions under which free expression is possible.</p><p style="text-align: justify;">The question before democratic societies is not whether AI will be used in security and governance; it will. It is whether we will insist on having this conversation openly, take the risks to our fundamental rights seriously before they become irreversible, and build the kind of legal and institutional infrastructure that can hold the line even when it is inconvenient. The window for that conversation is open. It will not stay open indefinitely. For organizations committed to free expression, the stakes could not be higher.</p><div><hr></div><p style="text-align: justify;"><em><strong><a href="https://futurefreespeech.org/who-we-are/jordi-calvet-bademunt/">Jordi Calvet-Bademunt</a> </strong>is a Senior Research Fellow at The Future of Free Speech at Vanderbilt University. </em></p><p style="text-align: justify;"><em><strong><a href="https://futurefreespeech.org/who-we-are/">Isabelle Anzabi</a></strong> is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.</em></p>]]></content:encoded></item><item><title><![CDATA[The Anti-"Woke" AI Agenda: Free Speech or State Speech?]]></title><description><![CDATA[The Trump Administration&#8217;s AI Action Plan demands &#8220;neutral&#8221; models&#8212;but neutrality defined by the government risks silencing dissent.]]></description><link>https://www.bedrockprinciple.com/p/the-anti-woke-ai-agenda-free-speech</link><guid isPermaLink="false">https://www.bedrockprinciple.com/p/the-anti-woke-ai-agenda-free-speech</guid><dc:creator><![CDATA[Isabelle Anzabi]]></dc:creator><pubDate>Wed, 23 Jul 2025 21:56:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!e3YL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!e3YL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!e3YL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 424w, 
https://substackcdn.com/image/fetch/$s_!e3YL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!e3YL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!e3YL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!e3YL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2080517,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.bedrockprinciple.com/i/169088003?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!e3YL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!e3YL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!e3YL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!e3YL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5697fb7-0ed4-426a-9f86-d768ff8098b6_2000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>At first glance, the Trump Administration&#8217;s &#8220;<strong><a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">Artificial Intelligence Action Plan</a></strong>&#8221; appears to champion free expression. But in practice, it risks distorting the information ecosystem it seeks to safeguard.</p><p>The AI Action Plan, launched today, outlines the administration&#8217;s approach to shaping national AI policy. Beyond setting procurement standards, the Plan directs revisions to federal AI frameworks to remove references to topics like misinformation, Diversity, Equity, and Inclusion (DEI), and climate change, and calls for evaluating foreign models for ideological alignment.</p><p>While the Plan is framed as a defense of free speech and American values, advocates should be vigilant to avoid a scenario where it reshapes the AI landscape along narrowly defined ideological lines.</p><p>For instance, the procurement measure mandates that AI systems adopted by federal agencies be politically neutral and &#8220;free from top-down ideological bias.&#8221; Developers of frontier large language models (LLMs) seeking federal contracts must demonstrate that their systems &#8220;objectively reflect truth rather than social engineering agendas.&#8221; The Plan frames these requirements as protections for freedom of speech and expression, asserting that &#8220;U.S. government policy must not interfere&#8221; with those rights.</p><p>While we are pleased to see free speech emphasized in the Plan, such a policy raises some red flags that free expression advocates should closely monitor. 
Generative AI systems play a growing role in shaping public discourse by amplifying certain viewpoints, sidelining others, and influencing how citizens engage with information. The Administration&#8217;s procurement choices affect not only government institutions but also everyday citizens&#8212;either because they use the models directly or because those models are integrated into public tools. That&#8217;s precisely why freedom of expression protections must apply not only to speakers, but also to users, developers, and the systems they create.</p><p>The Plan also introduces a dangerous ambiguity that could chill free expression. It seeks to weed out AI models using vague and undefined terms like &#8220;ideological bias&#8221; and &#8220;social engineering&#8221; &#8212; concepts that are heavily politicized. In context, these terms appear to target what the Administration has <strong><a href="https://abcnews.go.com/Politics/trumps-war-woke-sides-issue-dividing-country/story?id=121125797">frequently characterized</a></strong> as &#8220;woke&#8221; or progressive values. While this framing may appeal to some constituencies, conditioning access to federal contracts on adherence to a vague, ideologically loaded conception of &#8220;truth&#8221; risks encouraging viewpoint conformity.</p><p>That concern is deepened by two new provisions in the Plan&#8217;s implementation section. 
First, the Department of Commerce, through the National Institute of Standards and Technology (NIST), is directed to revise the widely used AI Risk Management Framework to remove all references to misinformation, DEI, and climate change.</p><p>It is no secret that some initiatives focused on misinformation and DEI have raised concerns regarding free speech&#8212;for example, when AI systems previously <strong><a href="https://futurefreespeech.org/wp-content/uploads/2023/12/FFS_AI-Policies_Formatting.pdf">refused to generate content</a></strong> related to the COVID-19 lab-leak theory or certain critiques of identity-based policies. </p><p>At the same time, the Administration should not substitute one specific agenda for another. The AI Action Plan&#8217;s objective should be understood and evaluated in the context of the broader efforts to <strong><a href="https://apnews.com/article/trump-dei-tuskegee-airmen-women-war-history-88a92c8485281d7c088c5eafe5dbf002">eliminate historical references</a></strong> to DEI across federal agencies, <strong><a href="https://www.npr.org/2025/05/21/nx-s1-5389638/trump-executive-actions-american-history-culture">revise the definition</a></strong> of American history in education and cultural institutions, and <strong><a href="https://www.whitehouse.gov/presidential-actions/2025/01/defending-women-from-gender-ideology-extremism-and-restoring-biological-truth-to-the-federal-government/">condition funding</a></strong> on ideological conformity.</p><p>Second, the Action Plan tasks NIST&#8217;s Center for AI Standards and Innovation (CAISI) with evaluating Chinese frontier models for alignment with Communist Party talking points and censorship directives. </p><p>In isolation, this might be seen as a reasonable national security measure. 
But paired with the Administration&#8217;s own efforts to steer domestic models toward &#8220;American values,&#8221; it reinforces a troubling symmetry in which AI systems, whether foreign or domestic, must be policed for ideological compliance. The contrast with CCP censorship is weakened if the U.S. government adopts its own content-based tests.</p><p>When AI companies anticipate political backlash&#8212;or loss of federal contracts&#8212;for generating certain responses, they may self-censor preemptively. That chilling effect not only narrows the expressive range of AI models, but also deprives users of lawful information and perspectives. This raises constitutional concerns not just for developers but also for the broader public, whose right to receive diverse and uncensored speech is at stake.</p><p>In the abstract, requiring AI to be neutral may seem like a worthy goal. But neutral, according to whom? If the government has the power to require AI systems to avoid &#8220;woke&#8221; content today, could a future administration require that they reflect more progressive values tomorrow? If each administration can dictate political neutrality based on its own ideological biases, then &#8220;neutrality&#8221; becomes little more than government-aligned speech standards in disguise.</p><p>At The Future of Free Speech, we&#8217;ve spent the past 18 months analyzing how major AI systems handle legal but controversial prompts. Many <strong><a href="https://futurefreespeech.org/report-freedom-of-expression-in-generative-ai-a-snapshot-of-content-policies/">initially refused</a></strong> to respond to questions that are legal, public, and newsworthy. But things have changed. 
Our 2025 tests <strong><a href="https://www.bedrockprinciple.com/p/one-year-later-ai-chatbots-show-progress">show</a></strong> that models now respond to approximately 75% of our prompts, up from just 50% the previous year.</p><p>Preliminary evidence suggests that models are becoming <em>less</em> &#8220;woke&#8221; in the sense that they are less likely to silence &#8220;offensive&#8221; or controversial ideas. This shift is not the result of political mandates, but rather due to transparency efforts, civil society engagement, and healthy market competition. During this time, major developers have committed to <strong><a href="https://openai.com/global-affairs/intellectual-freedom-by-design/">intellectual freedom</a> </strong>and to <strong><a href="https://www.anthropic.com/news/claude-3-7-sonnet">reducing chatbots&#8217; refusals</a></strong> to generate content.</p><p>This progress shows that improvements in free expression can&#8212;and should&#8212;happen without coercive government intervention. Generative AI can serve democratic society best when companies retain editorial independence and users have access to a wide range of viewpoints.</p><p>In our formal <strong><a href="https://www.bedrockprinciple.com/p/the-future-of-free-speechs-comments">submission</a></strong> to the <strong><a href="https://www.whitehouse.gov/briefings-statements/2025/02/public-comment-invited-on-artificial-intelligence-action-plan/">White House</a></strong> earlier this year, we recommended two core safeguards:</p><ul><li><p><strong>Avoid jawboning</strong>. Both formal and informal pressure on AI companies can chill speech. Transparency around official communications is essential.</p></li><li><p><strong>Support open-source AI</strong>. 
Open models offer a counterbalance to centralized control, enabling diverse communities to shape systems according to their values and needs.</p></li></ul><p>The AI Action Plan&#8217;s support for open-source and open-weight AI is encouraging in this regard. It acknowledges that open models enhance innovation, security, and academic research, especially for developers and institutions that cannot send sensitive data to proprietary systems.</p><p>We agree with the Plan&#8217;s recognition that &#8220;America needs leading open models founded on American values&#8221;&#8212;if those values include freedom of expression, pluralism, due process, and open debate.</p><p>But support for open models must not become a backdoor for enforcing ideological conformity. True pluralism requires that models trained and deployed under government support reflect not a single &#8220;truth,&#8221; but the competing perspectives that define a free society.</p><p>The Trump Administration&#8217;s AI Action Plan claims to position the U.S. as a global leader. But leadership in AI must also mean leadership in rights-based governance. If the U.S. 
wishes to model democratic values in the age of generative AI, it must resist the temptation to enforce political orthodoxy&#8212;even in the name of neutrality.</p><div><hr></div><p><em><strong><a href="https://futurefreespeech.org/who-we-are/">Isabelle Anzabi</a></strong> is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.<br><br><strong><a href="https://futurefreespeech.org/who-we-are/jordi-calvet-bademunt/">Jordi Calvet-Bademunt</a> </strong>is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.</em></p>]]></content:encoded></item><item><title><![CDATA[Who Decides What's Good for Society? AI, the DSA, and the Future of Expression in Europe ]]></title><description><![CDATA[Why the EU&#8217;s efforts to regulate AI may have unintended consequences for free expression.]]></description><link>https://www.bedrockprinciple.com/p/who-decides-whats-good-for-society</link><guid isPermaLink="false">https://www.bedrockprinciple.com/p/who-decides-whats-good-for-society</guid><dc:creator><![CDATA[Jordi Calvet-Bademunt]]></dc:creator><pubDate>Thu, 12 Jun 2025 16:12:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!c6E3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c6E3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c6E3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 424w, 
https://substackcdn.com/image/fetch/$s_!c6E3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!c6E3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!c6E3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c6E3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:487206,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.bedrockprinciple.com/i/165794888?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!c6E3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!c6E3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!c6E3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!c6E3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e476132-8a5d-495e-a9e6-e0d96e69b702_2000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>On June 3, 2025, I participated in the Global Network Initiative &amp; Digital Trust &amp; Safety Partnership EU Rights &amp; Risks Stakeholder Engagement Forum, which brought together civil society and tech companies. What follows is an adapted version of my remarks in the context of a panel on Digital Services Act (DSA) risk assessment developments.</em></p><div><hr></div><p>It is no secret that AI is being infused into all types of services. Meta has <strong><a href="https://www.cnbc.com/2024/04/18/meta-ai-assistant-comes-to-whatsapp-instagram-facebook-and-messenger.html">integrated</a></strong> its generative AI models into Instagram, Facebook, and WhatsApp. Google increasingly relies on AI-generated <strong><a href="https://support.google.com/websearch/answer/14901683">summaries</a></strong> to answer users' queries. Companies like OpenAI and DeepSeek have also introduced new search engines powered by generative AI.</p><p>But what happens if the legal safeguards we erect to control this technology inadvertently create excessive limits on the content and ideas we can create and discover? How we regulate these increasingly significant services matters profoundly for freedom of expression and access to information.</p><p>In Europe, AI is regulated by an expanding array of rules. The relationship between these regulations and AI is evident in cases such as the EU AI Act adopted last year. In other instances, there has been extensive discussion about how AI interacts with other major frameworks, such as privacy rules (GDPR) or ongoing debates around copyright rules. 
However, there is surprisingly little discussion about the intersection between AI and the Digital Services Act (DSA), despite its clear relevance. Yet there is no doubt that the DSA will play a significant role in governing AI services.</p><p>The European Commission has made clear that AI services can fall under the scope of the DSA. The DSA serves as the EU&#8217;s online safety rulebook, imposing obligations related to transparency, data access, and due process. Crucially, it requires very large online platforms (VLOPs) and search engines (VLOSEs) to mitigate societal or "systemic" risks. The AI Act contains similar provisions targeting powerful general-purpose AI models (GPAI). I (and others) have <strong><a href="https://www.techpolicy.press/safeguarding-freedom-of-expression-in-the-ai-era/">previously</a> <a href="https://www.techpolicy.press/the-european-commissions-approach-to-dsa-systemic-risk-is-concerning-for-freedom-of-expression/">highlighted</a></strong> the <strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5161173">concerns</a></strong> these obligations pose to freedom of expression.</p><p>The Commission expects <strong><a href="https://www.europarl.europa.eu/doceo/document/E-10-2025-001460-ASW_EN.html">Meta</a></strong> to soon submit its risk assessment for the AI features deployed across its platforms. This involves evaluating how AI integration might impact societal interests such as electoral processes, dissemination of illegal content, or users' well-being, as well as identifying suitable measures to mitigate these risks. Public information also indicates that the Commission is considering whether <strong><a href="https://www.mlex.com/mlex/artificial-intelligence/articles/2332484/chatgpt-faces-possible-designation-as-a-systemic-platform-under-eu-digital-law">ChatGPT</a></strong> should be subject to this obligation. 
Commissioner Virkkunen noted that <strong><a href="https://www.europarl.europa.eu/doceo/document/E-10-2025-000394-ASW_EN.html">DeepSeek</a></strong> could also fall under the DSA if integrated into other platforms.</p><p>This raises a key question: What do these regulations imply for freedom of expression and access to information? The short answer is that they pose significant dangers.</p><p>Given the current political context, it is crucial to emphasize that most Europeans live under strong democratic governance and the rule of law. In most EU countries, and certainly at the EU level, there are robust checks and balances that are not available elsewhere. It would thus be inappropriate to label the DSA or AI Act as censorship tools, although they certainly contain problematic aspects.</p><p>As the European Commission begins to apply the DSA&#8212;and the AI Act&#8212;to AI services, those problematic aspects, particularly the systemic risk obligations, deserve close scrutiny.</p><p>On the surface, systemic risk obligations seem beneficial. The DSA requires platforms to tackle significant societal issues, including protecting electoral processes, citizens' well-being, public security, and fundamental rights. 
The AI Act goes even further, demanding that powerful AI models address, among other issues, potential negative effects on "society as a whole."</p><p>The central concern, however, is that this well-intentioned legislation places enormous discretion in the hands of regulators&#8212;in this case, the European Commission. Consider this: What exactly is good for "society as a whole"? This varies widely from individual to individual, yet the European Commission is empowered to define it.</p><p>We have already witnessed troubling indications. Politico reported that Commissioner Breton, who previously oversaw the DSA, <strong><a href="https://www.politico.eu/article/european-union-digital-services-act-dsa-thierry-breton/">pressured</a></strong> enforcement teams to act against X and expressed concerns about <strong><a href="https://www.accessnow.org/press-release/dsa-gaza-online-content/">Gaza-related content</a></strong> and Elon Musk&#8217;s interview with <strong><a href="https://futurefreespeech.org/open-letter-to-thierry-breton-on-the-dsas-threats-to-free-speech/">Donald Trump</a></strong>. Recently, the new Commissioner in charge of the DSA, who has been more cautious, remarked that DeepSeek&#8217;s AI model involved "<strong><a href="https://www.europarl.europa.eu/doceo/document/E-10-2025-000394-ASW_EN.html">ideological censorship</a></strong>," suggesting potential conflict with EU principles and the DSA.</p><p>Many undoubtedly disagree with the views expressed by Trump, Musk, or the Chinese Communist Party. However, the central issue is not agreement or disagreement, but whether we want public authorities deciding which content is seen, prioritized, or suppressed.</p><p>What becomes of content promoting protests or advocating regional secession? Could that constitute systemic risk, and would it harm "society as a whole," as articulated by the AI Act? 
France&#8217;s shutdown of TikTok in <strong><a href="https://www.politico.eu/article/french-tiktok-ban-new-caledonia-overseas-territory-dangerous-precedent-macron-eu/">New Caledonia</a></strong>&#8212;though outside the DSA&#8217;s and the AI Act&#8217;s remit&#8212;raised serious <strong><a href="https://verfassungsblog.de/the-tiktok-ban-is-dead-long-live-the-ban-france-tiktok-kanaky/">concerns</a></strong> about the misuse of government powers and the undermining of the rule of law, even in EU democracies.</p><p>A statement related to the Gaza conflict could be interpreted as antisemitic or supportive of the Palestinian cause, depending on one's perspective. Similarly, certain remarks might be labeled transphobic or feminist depending on interpretation. Looking forward, if the far right gains political power, pro-LGBTQ or abortion-related content may face greater suppression risks.</p><p>Having a central authority determine which content we can and cannot generate or access in AI services would be deeply troubling for us as individuals and for the health of our democracies.</p><p>As enforcement of the DSA and AI Act intensifies, vigilant scrutiny of the European Commission&#8217;s actions is essential. While there is no reason to doubt the Commission's competence, it remains a political body subject to biases like any other.</p><p>During the DSA&#8217;s adoption, the UN Special Rapporteur on Freedom of Expression <a href="https://www.ohchr.org/en/documents/thematic-reports/ahrc4725-disinformation-and-freedom-opinion-and-expression-report">warned</a> about the vagueness of systemic risk obligations. Although the European Court of Justice will inevitably oversee the implementation of the DSA and AI Act, judicial oversight takes time, and significant damage could occur in the interim.</p><p>As Europe revisits its digital policy framework, narrowing systemic risk obligations merits serious consideration. 
Meanwhile, freedom of expression advocates must ensure Europe does not inadvertently compromise fundamental rights for an undefined, subjective notion of the greater good.</p><div><hr></div><p><em><strong><a href="https://futurefreespeech.org/who-we-are/jordi-calvet-bademunt/">Jordi Calvet-Bademunt</a> </strong>is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.</em></p>]]></content:encoded></item></channel></rss>