Why You Should Care about Algorithms, Free Speech, and Section 230
"I built this algo brick by brick" is more than just a comedic online expression.
Imagine walking into the world’s largest library. You’re surrounded by millions of books, but there is no catalog, no librarian, no sections. Every book is randomly stacked, with no indication of subject or relevance. You’re left to wander and will probably never find what you’re looking for. That’s the Internet without algorithmic curation. And it’s the future we’re hurtling toward if legal protections like Section 230 and First Amendment editorial rights are stripped away.
Today’s Internet only works because algorithms help us find the book in the endless stacks. Platforms sort, filter, and recommend content based on user behavior and content policies. They help enforce safety rules at scale, personalize feeds so users find what matters to them, and create order in the chaos. Just as a wise librarian guides a teenager to age-appropriate materials, algorithms help steer users toward content that suits them. Our research associate Isabelle Anzabi has put together a short and helpful explainer on how these recommendation systems work.
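To make the idea concrete, here is a deliberately simplified sketch, in Python, of what "sort, filter, and recommend" can look like in code. The `Post`, `User`, and `curate_feed` names are hypothetical, and real recommendation systems rely on learned models rather than hand-written topic weights; this toy merely illustrates the two editorial moves at issue: filtering against content policies and ranking by inferred user interest.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topics: set[str]
    violates_policy: bool = False   # e.g., flagged by a safety classifier

@dataclass
class User:
    interests: dict[str, float]     # topic -> inferred interest weight

def curate_feed(user: User, posts: list[Post], limit: int = 10) -> list[Post]:
    """Drop policy-violating posts, then rank the rest by topic overlap
    with the user's inferred interests (a stand-in for 'user behavior')."""
    allowed = [p for p in posts if not p.violates_policy]   # content-policy filter

    def relevance(post: Post) -> float:
        return sum(user.interests.get(t, 0.0) for t in post.topics)

    return sorted(allowed, key=relevance, reverse=True)[:limit]

# A reader who mostly engages with cooking content sees cooking first;
# the policy-violating post never reaches the feed at all.
user = User(interests={"cooking": 0.9, "politics": 0.2})
posts = [Post("a", {"politics"}),
         Post("b", {"cooking"}),
         Post("c", {"spam"}, violates_policy=True)]
print([p.post_id for p in curate_feed(user, posts)])   # -> ['b', 'a']
```

Both steps, the filter and the ranking, are editorial judgments: someone decided what violates policy and what counts as relevant. That is precisely the discretion at stake in the legal debate below.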
Unfortunately, a growing chorus of voices is arguing that the legal foundations that built the Internet as we know it need to change. They contend that platforms are either moderating too much or too little content, and that the algorithms used to perform this function should be subject to government scrutiny. These arguments often stem from a misunderstanding about how our current legal framework has shaped the modern Internet and what would happen if we were to abandon it.
Courts have repeatedly affirmed that editorial rights cover decisions about what not to say just as much as decisions about what to say. Whether it is a newspaper refusing to print a reply the government demands it carry, a parade organizer choosing who may march, or a utility company declining to include a third party's messages in its billing envelopes, the principle is the same: private entities cannot be compelled to carry speech they do not endorse. This principle extends to social media platforms and their algorithmic systems. When a platform chooses to promote one post over another, or to enforce community standards, it is making a choice about speech, and that choice is constitutionally protected.
Section 230 reinforces this by shielding platforms from liability for third-party content, whether they host, organize, or moderate it. Without this protection, platforms would face lawsuits every time they tried to enforce rules or highlight content. Before Section 230 existed, courts penalized platforms that attempted to moderate content, treating any editorial effort as grounds for publisher liability. That regime pushed platforms toward two extremes: scrub anything that might offend anyone, or do no moderation at all and let everything spread unchecked. Section 230 reversed that, empowering platforms to act responsibly without fear that every decision could end in court.
But a recent decision from the Third Circuit threatens to unravel this framework. In Anderson v. TikTok, the court held that when a platform's algorithm promotes harmful content, the recommendation becomes the platform's own first-party speech and therefore falls outside Section 230's immunity. If Section 230 protections or First Amendment editorial rights are rolled back, the consequences could be profound.
Some platforms would over-moderate to avoid legal exposure, removing lawful but controversial content. Others would under-moderate, letting all types of content, including material most of us would not want to see, spread unchecked. Such a shift would harm not the powerful but the vulnerable, the dissenters, and the voices that depend on intermediaries to be heard. Smaller platforms and start-ups might shut down, stop hosting user speech, or change their business models altogether because of litigation risk.
In a new policy paper for The Future of Free Speech, I argue that algorithmic curation is not just a technical necessity; it is a form of editorial discretion protected by the First Amendment that is essential to the functioning of the digital ecosystem. Section 230, in turn, provides the statutory safety net that makes such moderation feasible without constant legal peril.
Without algorithms, we would be forever lost in the library stacks, unable to find and consume the content that matters most to us. If we care about free speech online, we should fight to preserve the legal protections that make algorithmic curation possible.
Ashkhen Kazaryan is a Senior Legal Fellow at The Future of Free Speech, where she leads initiatives to protect free expression and shape policies that uphold the First Amendment in the digital age.