Sunset and Renew: Section 230 Should Protect Human Speech, Not Algorithmic Virality
Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, advocates a “repeal and renew” approach to Section 230 as a way to reform the current social media ecosystem.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
–Section 230 of the Communications Decency Act
Republicans and Democrats agree that the current social media ecosystem serves neither consumers nor citizens. The bipartisan draft legislation to sunset Section 230, proposed by House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) and ranking member Frank Pallone, Jr. (D-NJ) in May 2024, represents a crucial step forward. However, merely repealing Section 230 is insufficient. We must simultaneously update “the 26 words that created the Internet” to better align with First Amendment principles. The Supreme Court’s recent overturning of the Chevron doctrine, a decision that limits federal agencies’ ability to oversee public safety, makes this reform even more urgent.
The First Amendment is often misunderstood as permitting unlimited speech. In reality, it has never protected fraud, libel, or incitement to violence. Yet Section 230, in its current form, effectively shields these forms of harmful speech when amplified by algorithmic systems. It serves as both an unprecedented corporate liability shield and a license for technology companies to amplify certain voices while suppressing others. To truly uphold First Amendment freedoms, we must hold accountable the algorithms that drive harmful virality while protecting human expression.
The choice before us is not binary between unchecked viral harassment and heavy-handed censorship. A third path exists: one that curtails viral harassment while preserving the free exchange of ideas. This balanced approach requires careful definition but is achievable, just as we’ve defined limits on viral financial transactions to prevent Ponzi schemes. Current engagement-based optimization amplifies hate and misinformation while discouraging democratic deliberation.
Our proposed “repeal and renew” approach would remove the liability shield for social media companies’ algorithmic amplification while protecting citizens’ direct speech. This reform distinguishes between fearless speech—which deserves constitutional protection—and reckless speech that causes demonstrable harm. The evidence of such harm is clear: from the documented mental health impacts of engagement-optimized content to the spread of child sexual abuse material (CSAM) through algorithm-driven networks.
The distinction between protected speech and harmful algorithmic amplification becomes clear when comparing social media platforms to institutions like Wikipedia and public libraries. These traditional information sources don’t promote virality as a product. Instead, they allow editors and authors to speak directly, without algorithmic manipulation. Just as someone shouting “fire” in a crowded theater can be held liable for resulting harm, operators of algorithms that incentivize harassment for engagement should face accountability.
Critics like Cory Doctorow raise valid concerns that repealing Section 230 could harm smaller providers and non-profit platforms like Wikipedia. These concerns, however, presuppose that the First Amendment cannot protect unmediated online speech in a post-230 world. We believe it can, through carefully crafted legislation that distinguishes between service providers and engagement-optimizing platforms. This distinction is readily definable, because algorithmic curation for engagement is openly advertised and sold as a product.
As William Schuck noted in the Wall Street Journal, “the 26 words that created the Internet” have become “the 26 words that are breaking the First Amendment.” Reform of the current social media ecosystem will not destroy the Internet; rather, it will encourage innovation. Industry leaders already recognize this: Jack Dorsey backs decentralized social networking protocols, while Mark Zuckerberg has launched the federated platform Threads. The Project Liberty initiative, with its People’s Bid for TikTok, exemplifies the potential for a public interest-oriented internet that serves democratic values.
Section 230 today inadvertently circumvents the First Amendment’s guarantees of free speech, assembly, and petition. It enables an ad-driven business model and algorithmic moderation that optimize for engagement at the expense of democratic discourse. Algorithmic amplification is a product, not a public service. By sunsetting Section 230 and implementing new legislation that holds proprietary algorithms accountable for demonstrable harm, we can finally extend First Amendment protections to the digital public square. That reform is long overdue.