Sunset and Renew: Section 230 Should Protect Human Speech, Not Algorithmic Virality
Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, advocates for a “repeal and renew” approach to Section 230 to reform the current social media ecosystem.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
–Section 230 of the Communications Decency Act
Republicans and Democrats agree the current social media ecosystem serves neither consumers nor citizens. The bipartisan draft legislation to sunset Section 230, proposed by House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) and ranking member Frank Pallone, Jr. (D-NJ) in May 2024, represents a crucial step forward. However, merely repealing Section 230 is insufficient. We must simultaneously update “the 26 words that created the Internet” to better align with First Amendment principles. The Supreme Court’s recent overturning of the Chevron doctrine, which curtails federal agencies’ ability to oversee public safety, makes this reform even more urgent.
The First Amendment is often misunderstood as permitting unlimited speech. In reality, it has never protected fraud, libel, or incitement to violence. Yet Section 230, in its current form, effectively shields these forms of harmful speech when amplified by algorithmic systems. It serves as both an unprecedented corporate liability shield and a license for technology companies to amplify certain voices while suppressing others. To truly uphold First Amendment freedoms, we must hold accountable the algorithms that drive harmful virality while protecting human expression.
The choice before us is not binary between unchecked viral harassment and heavy-handed censorship. A third path exists: one that curtails viral harassment while preserving the free exchange of ideas. This balanced approach requires careful definition but is achievable, just as we’ve defined limits on viral financial transactions to prevent Ponzi schemes. Current engagement-based optimization amplifies hate and misinformation while discouraging constructive dialogue.
Our proposed “repeal and renew” approach would remove the liability shield for social media companies’ algorithmic amplification while protecting citizens’ direct speech. This reform distinguishes between fearless speech—which deserves constitutional protection—and reckless speech that causes demonstrable harm. The evidence of such harm is clear: from the documented mental health impacts of engagement-optimized content to the spread of child sexual abuse material (CSAM) through algorithm-driven networks.
The distinction between protected speech and harmful algorithmic amplification becomes clear when comparing social media platforms to institutions like Wikipedia and public libraries. These traditional information sources don’t promote virality as a product. Instead, they allow editors and authors to speak directly, without algorithmic manipulation. Just as someone shouting “fire” in a crowded theater can be held liable for resulting harm, operators of algorithms that incentivize harassment for engagement should face accountability.
Critics like Cory Doctorow raise valid concerns that repealing Section 230 could harm smaller providers and non-profit platforms like Wikipedia. However, these concerns presuppose that the First Amendment cannot protect unmediated online speech in a post-230 world. We believe it can, through carefully crafted legislation that distinguishes between service providers and engagement-optimizing platforms. This distinction is readily definable, as algorithmic curation for engagement is openly advertised and sold as a product.
As William Schuck noted in the Wall Street Journal, “the 26 words that created the Internet” have become “the 26 words that are breaking the First Amendment.” Reform of the current social media ecosystem will not destroy the Internet; rather, it will encourage innovation. Industry leaders already recognize this: Jack Dorsey backs decentralized social networking protocols, while Mark Zuckerberg has launched the federated platform Threads. The Project Liberty initiative, with its People’s Bid for TikTok, exemplifies the potential for a public interest-oriented internet that serves human flourishing.
Section 230 today inadvertently circumvents the First Amendment’s guarantees of free speech, assembly, and petition. It enables an ad-driven business model and algorithmic moderation that optimize for engagement at the expense of democratic discourse. Algorithmic amplification is a product, not a public service. By sunsetting Section 230 and implementing new legislation that holds proprietary algorithms accountable for demonstrable harm, we can finally extend First Amendment protections to the digital public square — a change long overdue.
Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions surrounding power, governance, and democracy as they consider how technology can better serve the public interest.
As a part of the Allen Lab’s Political Economy of AI Essay Collection, David Gray Widder and Mar Hicks draw on the history of tech hype cycles to warn against the harmful effects of the current generative AI bubble.
As a part of the Allen Lab’s Political Economy of AI Essay Collection, Emily S Lin and Marshall Ganz call on us to reckon with how humans create, exercise, and structure power, in hopes of meeting our current technological moment in a way that aligns with our values.