Why Free Expression Must Anchor Global AI Governance
In our submission to the UN's Global Dialogue on AI Governance, we argue that freedom of expression must be at the heart of global AI governance, not an afterthought.
The Global Dialogue on AI Governance is the United Nations’ first universal forum on AI, bringing together member states, civil society, and other stakeholders in Geneva this July to shape the trajectory of international AI governance and regulation.
We’ve submitted input to the Global Dialogue’s open call for comments to ensure that freedom of expression and access to information are at the center of multilateral AI governance conversations.
Across governance instruments produced since 2018, speech protections tend to be expressed in general terms that fall short of the legality, legitimacy, and necessity requirements set out in Article 19 of the International Covenant on Civil and Political Rights.
Our submission calls on the Dialogue to apply these rigorous principles and standards in its regulatory framework, building on the commitments member states have already made through the Global Digital Compact.
You can read our full submission below:
Submission to the Global Dialogue on AI Governance
1. In your opinion, what outcomes would make the first Global Dialogue on AI Governance a success?
The Global Dialogue on AI Governance offers a unique opportunity to shape how the world governs AI, owing to its universality and its anchoring within the broader UN system, including commitments member states have already made through the Global Digital Compact and related instruments.
Human rights—particularly freedom of expression and access to information—must be a cornerstone of the Dialogue and of any multilateral governance instruments that are eventually adopted.
Yet freedom of expression and access to information are routinely subsumed under information integrity objectives without the counterweight of Article 19 of the International Covenant on Civil and Political Rights. Most instruments invoke human rights without naming freedom of expression or highlighting the importance of the necessity and proportionality requirements under Article 19(3).
The Dialogue’s human rights thematic cluster should engage Article 19 directly, treating the legality, necessity, and proportionality triad as the standard for adopting and evaluating AI content governance measures rather than as aspirational language. The Panel’s annual assessment should encompass how AI-enabled surveillance, AI-assisted political communication, and information integrity governance measures affect freedom of expression and access to information. The Co-Chairs’ summary should document the divergence between existing instruments on freedom of expression standards and identify it as a question requiring resolution in subsequent processes.
The Global Digital Compact should commit member states to protecting freedom of expression and access to information in AI risk mitigation and to refraining from information restrictions inconsistent with international human rights law. UNGA Resolution A/RES/76/227 explicitly requires that efforts to counter disinformation promote and protect rather than violate freedom of expression. The UNESCO Recommendation on the Ethics of AI requires that limitations on freedom of expression be lawful, necessary, and proportionate. The Dialogue should continue to apply freedom of expression and access to information standards in the AI governance context.
2. Which thematic areas reflect your priorities for urgent action? (Select up to 4)
Protection and promotion of human rights; Transparency, accountability, and human oversight; Social, economic, ethical, cultural, linguistic, and technical implications of AI; and Safe, secure, and trustworthy AI.
3. Please briefly explain your selection.
Protection and promotion of human rights is the primary selection because the governance gap is structural and documented. Most major multilateral AI instruments invoke human rights without naming freedom of expression or explicitly endorsing Article 19’s necessity and proportionality requirements. The Dialogue’s human rights thematic cluster and the Co-Chairs’ summary are the mechanisms through which this record can be documented and addressed.
Transparency, accountability, and human oversight matter for freedom of expression and access to information because AI-enabled content moderation operating without appropriate safeguards can lead to unjustified restrictions—often falling hardest on minority voices. Those safeguards include human review, meaningful avenues for appeal, and evaluations that account for over-moderation, not only under-moderation. The Scientific Panel is well placed to assess whether existing transparency and oversight frameworks are designed to protect expression or merely to enforce compliance with undefined information integrity standards.
Social, economic, ethical, cultural, and technical implications encompass the chilling effect: AI-enabled mass surveillance deters people from speaking, organizing, and dissenting. The Human Rights Committee and the UN Special Rapporteur on freedom of expression have established that the chilling effect is itself a harm to freedom of expression independent of formal sanction, and AI removes the scale constraints that previously limited such operations. The Scientific Panel’s evidence-gathering mandate should specifically encompass how mass surveillance enabled by AI affects freedom of expression at population scale.
Safe, secure, and trustworthy AI is particularly relevant to information integrity. Engaging this theme from a freedom of expression perspective within the Dialogue’s thematic discussions is essential. Doing so helps ensure that AI safety objectives are not designed in ways that treat restrictions on expression as a default compliance strategy, and that the Co-Chairs’ summary records freedom of expression as a structural limit on safety measures rather than a competing objective.
4. Are there any cross-cutting or emerging issues not captured by the listed themes?
The most significant cross-cutting issue not captured by the listed themes is the structural relationship between information integrity as a governance objective and freedom of expression as a legal constraint. This relationship determines the practical effect of AI governance frameworks on expressive freedom, yet it has not been named as a structural problem in any major multilateral instrument.
Information integrity appears across AI governance instruments as a primary or co-equal objective to freedom of expression and is often structurally embedded within governance instruments. Where information integrity measures are explicitly conditioned on respect for freedom of expression, as in the 2024 OECD Recommendation and select provisions of the Global Digital Compact, the right functions as a substantive limit. Where information integrity is framed as an independent or primary objective without that counterweight, the legal discipline of Article 19 recedes. Categories such as ‘inaccurate information,’ ‘harmful content,’ and ‘content that undermines social stability,’ left undefined and unmoored from proportionality requirements, have a documented history of being applied to journalism, political dissent, and civil society activity in both democratic and authoritarian contexts.
Notably, the Global Digital Compact’s information integrity provisions (paragraphs 33 and 34) frame countering disinformation as a cooperative objective without explicitly subjecting it to the necessity and proportionality standard, even though paragraph 23(d) separately commits states to refraining from information restrictions inconsistent with international law. UNGA Resolution A/RES/76/227, adopted in December 2021, set a stronger precedent by explicitly requiring that disinformation responses ‘promote and protect and do not violate individuals’ freedom of expression.’ The Dialogue’s thematic discussion on safe, secure, and trustworthy AI, and its Co-Chairs’ summary, should clarify that the GDC’s information integrity commitments are to be read in light of this requirement.
5. How are the governance gaps in your selected thematic areas affecting your country, region, or sector?
The Future of Free Speech (FoFS) works primarily on freedom of expression in the United States and the European Union. Both are democracies with relatively strong protections for free expression, yet both face challenges. We examine these challenges in our recent report, That Violates My Policies: AI Laws, Chatbots, and the Future of Expression, which assesses AI and freedom of expression across six jurisdictions (the United States, the European Union, Brazil, China, India, and the Republic of Korea) in collaboration with leading local experts.
In the European Union, the systemic risk assessment requirements under the Digital Services Act and the AI Act are a particular cause for concern. They rest on vague provisions that are susceptible to misuse and orient compliance around risks rather than rights.
In the United States, the policy focus on “truth-seeking” and “neutral” AI risks producing undue restrictions on free speech and access to information, in part because the underlying concepts are so loosely defined. Deepfake prohibitions on political content in several states are a further source of concern.
Together, these provisions reflect a broader multilateral gap: when information integrity is foregrounded as an objective without the counterweight of legality, necessity, and proportionality, governance frameworks generate compliance pressure toward restricting contested but lawful speech.
The international AI governance corpus does not provide the normative foundation a rights-protective framework requires. When the multilateral instruments that inform national policymaking treat information integrity as a primary objective without grounding it in Article 19’s operative standards, that gap propagates into domestic legislative design. The Dialogue, as the universal forum connecting the GDC’s commitments to AI governance practice, is the appropriate venue in which to establish that legality, necessity, and proportionality are not optional standards in AI content governance but the applicable international human rights law framework.
6. What role can the AI Dialogue play in advancing international cooperation on AI governance?
Populations in the Global Majority are not represented in the processes that have produced some of the dominant normative instruments, such as the OECD AI Principles and the EU AI Act. The Dialogue’s universal membership creates the conditions under which this asymmetry can be corrected, provided the human rights cluster engages substantively rather than procedurally.
Member states arrived at the Dialogue having already committed, through the Global Digital Compact, to protect freedom of expression in risk mitigation measures and to refrain from information restrictions inconsistent with international law. These commitments provide a shared foundation for discussion that should not need to be relitigated. The Dialogue’s value is in clarifying how those commitments apply to AI governance specifically: to content moderation systems, to AI-enabled surveillance, and to the information integrity frameworks that states are building into national AI legislation. UNGA Resolution A/RES/76/227 of December 2021 further established that disinformation responses must not violate freedom of expression. The Dialogue can treat this as settled UN-system doctrine and ask how it applies to AI-mediated content decisions.
The Dialogue can also commission the Scientific Panel to provide evidence on how AI-enabled surveillance and content governance affect freedom of expression across different governance contexts, building the evidentiary foundation that subsequent instruments will require. The Special Rapporteur on freedom of expression’s 2021 report on disinformation (A/HRC/47/25) found that state and company responses to disinformation have been ‘problematic, inadequate and detrimental to human rights’ and called for responses grounded in the international human rights framework. That finding remains directly applicable to AI governance and should inform the Panel’s mandate.
7. What existing initiatives, partnerships, or mechanisms should the Dialogue build upon or connect with?
The Global Digital Compact (2024) provides the most recent and authoritative UN-level foundation. Paragraph 30 commits states to ‘robust risk mitigation and redress measures that also protect privacy and freedom of expression.’ Paragraph 31(c) commits cooperation to ‘protect privacy, freedom of expression and access to information while addressing harms.’ Paragraph 23(d) commits states to refrain from ‘imposing restrictions on the free flow of information and ideas that are inconsistent with obligations under international law.’ Taken together, these provisions establish that member states accept freedom of expression as a structural limit on AI governance measures, not merely as a value to be acknowledged.
UNGA Resolution A/RES/76/227 (December 2021) on countering disinformation for the promotion and protection of human rights and fundamental freedoms explicitly required that responses to disinformation ‘promote and protect and do not violate individuals’ freedom of expression.’ This resolution predates most AI governance instruments and establishes the applicable UN-system standard. The Dialogue should treat it as part of the normative foundation on which its information integrity discussions are built.
The Special Rapporteur on freedom of expression’s 2021 report on disinformation (A/HRC/47/25) found that state and company responses have been problematic and detrimental to human rights, and called for multidimensional responses grounded in the international human rights framework.
The UNESCO Recommendation on the Ethics of AI (2021) is the most developed member state-endorsed instrument on freedom of expression and AI. It invokes the proportionality test in its human rights sense, prohibits AI systems from being used for social scoring or mass surveillance, and requires states to ensure AI actors respect and promote freedom of expression in automated content moderation.
The 2024 revision of the OECD Recommendation on Artificial Intelligence introduced the formulation that harm mitigation must respect freedom of expression, conditioning information integrity measures on compliance with this right.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.