Why It’s So Easy to Talk Past Each Other about Platform Moderation and Online Speech
The TL;DR: Platforms have a legal right to moderate content; that doesn’t mean we shouldn’t criticize them when they go too far.
Last week, The Intercept reported that Meta would be removing posts from its platforms containing the word “antifa” (short for anti-fascist) if the word appeared alongside what the company deems a “content-level threat signal.”
That bucket includes any “visual depiction of a weapon,” “reference to arson, theft, or vandalism,” or “military language” used near the term. According to the policy, use of “antifa” could also trigger account bans or hidden comments if it appears in “references to historical or recent incidents of violence,” including “historic wars” and “battles.” In fairness, a Meta spokesperson pointed to a recent transparency report noting the company will also remove QAnon content when it appears alongside “content-level threat signals.”
Still, if the reporting is accurate, the policy raises an obvious worry: moderation this broad could sweep up historians, journalists, educators, and activists who use the word in perfectly legitimate ways. In other words, it looks like a policy aimed at flagging a particular viewpoint, and when concepts or ideas become the target of overly restrictive moderation policies, online discussion suffers.
Here is where debates over online speech tend to get stuck. Meta has a First Amendment right to write and enforce this policy. We will say that plainly. We will also say that the policy could lead to outcomes that are bad for the practical exercise of free speech online.
Holding both positions at the same time is hard work, and it is where free speech organizations like ours are most often accused of being hypocritical or unprincipled.
Two Different Questions
The confusion usually starts with a conflation. “Can the government stop Meta from taking this content down?” and “Should Meta be taking this content down?” are different questions, with different answers, and they are governed by different tools.
The First Amendment restrains the government, not private companies. Your rights to speak anonymously, say things that offend others, criticize religion, boycott, and counter speech you disagree with apply online exactly as they do offline.
Government attempts to penalize that speech run into the same constitutional problem online that they would if aimed at a pamphlet on a street corner. At the same time, the Supreme Court has held that platform content moderation decisions are also generally protected by the First Amendment.
In Moody v. NetChoice and NetChoice v. Paxton (2024), my colleague Ashkhen Kazaryan explains, “the Supreme Court affirmed that platforms like Facebook and YouTube are protected by the First Amendment when they make editorial decisions about user-generated content, including whether to publish, remove, promote, or demote that content through algorithmic systems.”
“This kind of curating, editing, and prioritizing,” she adds, “is part of the freedom to speak.”
To add to this framework, the oft-misunderstood Section 230 of the Communications Decency Act prevents platforms from being treated as the publisher of third-party content and shields their good-faith moderation efforts from liability.
Critics on the Right have argued that repealing or narrowing 230 would reduce censorship of conservative views. Critics on the Left assume that removing 230’s liability shield would push platforms to clean up harmful content. Both have the incentives backward.
The pre-230 regime punished platforms for trying to moderate at all; without Section 230, platforms would be forced to choose between leaving almost everything up or over-removing anything remotely controversial to minimize risk. Neither outcome is good for online speech.
So when a platform moderates in a way we disagree with, such as broad attempts to ban particular words or phrases, we are not going to argue that the government should make it stop. The same First Amendment that protects the user’s post protects the platform’s decision about whether to host it.
Culture Matters, Too
Which brings us to the part of this debate that actually matters and gets too little airtime in most coverage: what platforms should do, even when they have every legal right to do otherwise.
Our view is straightforward. Platforms’ moderation decisions are legally protected, and Section 230 has been essential to the open internet. At the same time, we press platforms to design moderation policies that reflect a strong culture of free speech.
Overly restrictive policies, especially those aimed at policing unpopular or offensive viewpoints, tend to backfire. They can breed resentment, turn restricted or banned users into martyrs, push extremist content into darker corners of the web, and often silence the very minority voices that hate-speech rules are supposed to protect.
Our 2023 Scope Creep report spelled out what this looks like in practice. We analyzed the hate-speech policies of eight major platforms and found that the scope of moderated content had expanded significantly since the early 2000s, from prohibiting the promotion of hatred or racist speech to banning content containing harmful stereotypes, conspiracy theories, and curses targeting protected groups.
And we chronicled who was actually impacted by such policies: a Nigerian-American writer suspended for posting screenshots of racist messages and death threats; a journalism professor whose account was disabled after she posted a critique of tropes about Black-on-Black crime; a Jewish woman whose posts criticizing platforms’ policing of antisemitic content were removed; a history teacher whose YouTube channel was taken down for educational videos about Nazi propaganda.
It’s true that content moderation has changed a lot across platforms since 2023, but the broader lesson holds: it is very difficult to write content policies, and enforce them, precisely enough to target things like hate speech without also flagging content that merely looks harmful on the surface. Policies are usually developed with good intentions, but in practical enforcement, especially when companies rely on AI automation and algorithms, they often miss important elements like context, satire, and the reclamation of slurs by minority groups.
This is why we have engaged actively with the Meta Oversight Board on specific moderation decisions. In a formal submission, for example, we argued that Meta should refrain from banning the phrase “from the river to the sea” in the wake of the Israel-Palestine conflict that followed October 7, because a ban could stifle legitimate debate, among the other reasons outlined above.
It’s also why we have been strong proponents of decentralized and crowd-sourced moderation models that empower users and don’t rely on top-down moderation decisions.
As Jacob Mchangama and his co-authors argue in their work on “prosocial media,” platforms should move away from opaque, centralized control and instead enable users to provide context, surface shared understanding across communities, and collaboratively evaluate content through mechanisms like community annotations and consensus-driven fact-checking. By making the “social provenance” of information visible—showing which communities agree or disagree with it—platforms can reduce polarization and build trust without resorting to censorship.
Similarly, we released a decentralized content moderation prototype that combines AI-assisted tools with human, community-based judgment, allowing different groups to set their own moderation thresholds. We believe decentralized models can better reflect diverse norms, improve accuracy, and restore user agency—while still maintaining accountability. These approaches point toward a more transparent, pluralistic, and speech-protective model of online governance.
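For readers who want a concrete picture of what “different groups set their own moderation thresholds” might mean, here is a minimal, hypothetical sketch in Python. To be clear, this is not our prototype’s actual code: the names (`Community`, `harm_score`, `moderate`) are illustrative assumptions, and the AI classifier is stubbed with a toy keyword check.

```python
from dataclasses import dataclass

# Hypothetical sketch of community-set moderation thresholds.
# Not the actual prototype: all names and logic are illustrative only.

@dataclass
class Community:
    name: str
    threshold: float  # harm score above which content is hidden here

def harm_score(text: str) -> float:
    """Stand-in for an AI classifier returning a 0.0-1.0 harm estimate."""
    flagged_terms = {"arson", "vandalism"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def moderate(text: str, community: Community) -> str:
    """Each community applies its own threshold to the same shared score."""
    score = harm_score(text)
    if score > community.threshold:
        return f"hidden in {community.name} (score {score:.1f})"
    return f"visible in {community.name} (score {score:.1f})"

if __name__ == "__main__":
    post = "A history lecture on arson attacks during the war."
    strict = Community("strict-forum", threshold=0.3)
    permissive = Community("open-forum", threshold=0.8)
    print(moderate(post, strict))      # hidden: exceeds the stricter bar
    print(moderate(post, permissive))  # visible: same post, different norms
```

The point of the sketch is simply that one shared signal can yield plural outcomes: the score is computed once, but each community, not a central authority, decides what to do with it.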
Holding Both Positions
How do we use this framework to analyze Meta’s “antifa” policy?
The framework tells us not to call for legislation that would force Meta to change course. It also tells us not to ask the government to strip platforms of their editorial discretion or to roll back Section 230 as punishment for moderation we dislike.
Government officials granted that kind of power over online speech will use it, and the track record in Europe, where legal content is routinely over-removed in response to sweeping regulations, should be a warning. And if you are tempted to hand that power to the officials currently in charge, it is worth asking what their opponents will do with the same power the next time the balance shifts.
It also tells us not to shrug and move on. The policy, as reported, looks overbroad. If our analysis shows that it might have unintended consequences for online speech, we will push back. As free speech advocates, we can call for better moderation tools and practices without enlisting the state.
When people accuse free speech organizations of hypocrisy for defending platforms’ legal right to moderate, they are usually reading us as endorsing specific moderation choices. But that is like assuming we endorse the content of hateful or offensive speech whenever we oppose attempts to ban hate speech by law.
What we actually defend is the structure that keeps those choices in private hands, out of reach of whichever government happens to be in office. The cultural fight over how platforms should moderate — over what a free-speech-friendly internet should look like in practice — is also a fight worth having, and the one our work is built to advance.
Justin Hayes is the Director of Communications at The Future of Free Speech and the Managing Editor of The Bedrock Principle.