Who Decides What's Good for Society? AI, the DSA, and the Future of Expression in Europe
Why the EU’s efforts to regulate AI may have unintended consequences for free expression.
On June 3, 2025, I participated in the EU Rights & Risks Stakeholder Engagement Forum, co-organized by the Global Network Initiative and the Digital Trust & Safety Partnership, which brought together civil society and tech companies. What follows is an adapted version of my remarks, delivered during a panel on Digital Services Act (DSA) risk assessment developments.
It is no secret that AI is being infused into all types of services. Meta has integrated its generative AI models into Instagram, Facebook, and WhatsApp. Google increasingly relies on AI-generated summaries to answer users' queries. Companies like OpenAI and DeepSeek have also introduced new search engines powered by generative AI.
But what happens if the legal safeguards we erect to control this technology inadvertently impose excessive limits on the content and ideas we can create and discover? How we regulate these increasingly significant services matters profoundly for freedom of expression and access to information.
In Europe, AI is governed by an expanding array of rules. In some cases, the connection between these rules and AI is explicit, as with the EU AI Act adopted last year. In others, it has been extensively debated, as with privacy rules (the GDPR) or the ongoing discussions around copyright. Yet there has been surprisingly little discussion of the intersection between AI and the Digital Services Act (DSA), despite its clear relevance. There is no doubt, however, that the DSA will significantly govern important AI services.
The European Commission has made clear that AI services can fall within the scope of the DSA. The DSA serves as the EU's online safety rulebook, imposing obligations related to transparency, data access, and due process. Crucially, it requires very large online platforms (VLOPs) and very large online search engines (VLOSEs) to mitigate societal or "systemic" risks. The AI Act contains similar provisions targeting powerful general-purpose AI (GPAI) models. I (and others) have previously highlighted the risks these obligations pose to freedom of expression.
The Commission expects Meta to soon submit a risk assessment covering the AI features deployed across its platforms. That assessment must evaluate how AI integration might affect societal interests such as electoral processes, the dissemination of illegal content, and users' well-being, and identify suitable measures to mitigate those risks. Public reporting also indicates that the Commission is considering whether ChatGPT should be subject to this obligation, and Commissioner Virkkunen has noted that DeepSeek could likewise fall under the DSA if integrated into other platforms.
This raises a key question: What do these regulations imply for freedom of expression and access to information? The short answer is that they pose significant dangers.
Given the current political context, it is important to emphasize that most Europeans live under strong democratic governance and the rule of law. In most EU countries, and certainly at the EU level, there are robust checks and balances of a kind many other jurisdictions lack. It would thus be inappropriate to label the DSA or the AI Act censorship tools, although both certainly contain problematic aspects.
As the European Commission begins to apply the DSA—and the AI Act—to AI services, it is crucial to closely scrutinize the problematic aspects, particularly the systemic risk obligations.
On the surface, systemic risk obligations seem beneficial. The DSA requires platforms to tackle significant societal issues, including protecting electoral processes, citizens' well-being, public security, and fundamental rights. The AI Act goes even further, demanding that providers of powerful AI models address, among other issues, potential negative effects on "society as a whole."
The central concern, however, is that this well-intentioned legislation places enormous discretion in the hands of regulators, in this case the European Commission. Consider this: What exactly is good for "society as a whole"? The answer varies widely from person to person, yet the European Commission is empowered to define it.
We have already seen troubling signs. Politico reported that Commissioner Breton, who previously oversaw the DSA, pressured enforcement teams to act against X and raised concerns about Gaza-related content and Elon Musk's interview with Donald Trump. More recently, the new Commissioner in charge of the DSA, who has been more cautious, remarked that DeepSeek's AI model involved "ideological censorship," suggesting a potential conflict with EU principles and the DSA.
Many undoubtedly disagree with the views expressed by Trump, Musk, or the Chinese Communist Party. However, the central issue is not agreement or disagreement, but whether we want public authorities to determine which content we may see, prioritize, or suppress.
What becomes of content promoting protests or advocating regional secession? Could it constitute a systemic risk, one that harms "society as a whole" as articulated in the AI Act? France's shutdown of TikTok in New Caledonia, though outside the remit of the DSA and the AI Act, raised serious concerns about the misuse of government powers and the undermining of the rule of law, even in EU democracies.
A statement about the Gaza conflict could be read as antisemitic or as supportive of the Palestinian cause, depending on one's perspective. Similarly, the same remark might be labeled transphobic by some and feminist by others. Looking ahead, if the far right gains political power, pro-LGBTQ or abortion-related content may face a greater risk of suppression.
Having a central authority determine which content we can and cannot generate or access through AI services would be deeply troubling for us as individuals and for the health of our democracies.
As enforcement of the DSA and AI Act intensifies, vigilant scrutiny of the European Commission’s actions is essential. While there is no reason to doubt the Commission's competence, it remains a political body subject to biases like any other.
During the DSA’s adoption, the UN Special Rapporteur on Freedom of Expression warned about the vagueness of systemic risk obligations. Although the European Court of Justice will inevitably oversee the implementation of the DSA and AI Act, judicial oversight takes time, and significant damage could occur in the interim.
As Europe revisits its digital policy framework, narrowing systemic risk obligations merits serious consideration. Meanwhile, freedom of expression advocates must ensure Europe does not inadvertently compromise fundamental rights for an undefined, subjective notion of the greater good.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.