Liability Laundering by Algorithm
With California's SB 771, lawmakers attempt to find a new, constitutionally dubious backdoor to regulate social media content moderation.
California lawmakers are once again testing the constitutional limits of internet regulation, and SB 771 is their latest attempt to sidestep Section 230 and the First Amendment. On its face, the bill claims to impose liability on large social media companies only when they materially contribute to violations of longstanding civil rights laws, such as the Ralph and Bane Acts.
In reality, SB 771 seeks to punish platforms for the content they algorithmically recommend, even if that content is lawful, protected speech, by labeling the recommendation itself as “conduct” rather than “speech.” That semantic shift doesn’t cure the constitutional problem; it magnifies it.
SB 771 rests on the theory that when a platform’s algorithm amplifies a third party’s violent or harassing post, the platform is no longer just a conduit but a participant. That theory flies in the face of Section 230’s core promise: that platforms are not liable for third-party content they didn’t create, even if they amplify it.
Section 230(c)(1) makes it crystal clear that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts across the country, including the Ninth Circuit, have interpreted this immunity broadly, and for good reason. Stripping it away just because the content appeared in a user’s feed thanks to a ranking algorithm ignores how the modern internet works. Algorithms are not optional features but the structure of the digital world. To call that structure unlawful conduct every time someone posts something harmful is to demand omniscient pre-censorship on a massive scale.
The bill’s backers cite the Third Circuit’s recent decision in Anderson v. TikTok to argue that algorithmic curation can fall outside of Section 230. But that case isn’t binding in California. In fact, it clashes sharply with Ninth Circuit precedent, particularly Dyroff v. Ultimate Software and Gonzalez v. Google.
In Dyroff, the court held that algorithmic recommendations, even those that allegedly pushed users toward dangerous content, were protected under Section 230 because they were merely tools for publishing third-party speech. SB 771 essentially invites plaintiffs to relitigate that question under a thin veneer of civil rights enforcement. That amounts to forum shopping disguised as state-level accountability.
Even if the bill managed to dodge Section 230 preemption, it still runs headfirst into the First Amendment. The Supreme Court has repeatedly affirmed that editorial discretion is itself a protected form of speech. As Moody v. NetChoice reaffirmed, platforms exercise expressive judgment when they choose what content to surface, bury, or remove. SB 771 punishes that judgment by tying massive civil penalties to algorithmic outputs that correlate with alleged statutory violations. But algorithms merely operationalize editorial judgment at scale, so penalizing their outputs penalizes the editorial judgment itself.
The legislative analysis tries to reframe this as a modest effort to prevent truly egregious harms: violence, intimidation, and harassment. But the liability hook is still user speech. And the platform’s so-called misconduct is still choosing to display that speech in a feed. No matter how much the drafters insist this is about conduct, not content, the First Amendment doesn’t allow that kind of end-run. Calling curation “product design” doesn’t change the constitutional character of what’s being punished.
Worse, SB 771 would chill lawful speech far beyond the limited category of threats or harassment it targets. Platforms, faced with the risk of uncapped civil penalties for unknowingly recommending third-party posts that might be construed as violations of state civil rights laws, will overcorrect. They’ll suppress content, delete controversial users, and flatten political and cultural discourse to a dull hum. The law doesn’t require them to, but the threat of litigation and liability all but guarantees it. We have seen this dynamic play out over and over again abroad.
This bill is not about safety. It is another effort by a state government to circumvent platforms’ constitutional protections and impose state-controlled content moderation. And it sets a dangerous precedent: that a state can impose crushing liability on private actors for failing to sanitize their platforms to a particular state’s liking, even when the offending speech is constitutionally protected.
That idea is un-American, unconstitutional, and technologically incoherent. It deserves to be rejected not just because it conflicts with Section 230, but because it betrays the First Amendment’s most basic promise: that the government cannot decide what speech is good and what speech is bad.
Ashkhen Kazaryan is a Senior Legal Fellow at The Future of Free Speech, where she leads initiatives to protect free expression and shape policies that uphold the First Amendment in the digital age.