
Commentary
This policy primer from the Allen Lab for Democracy Renovation introduces the ideas and debates surrounding reform of Section 230 of the Communications Decency Act and how such reform could remake the social media ecosystem.
Authors: Allison Stanger, Saddat Nazir, Sarah Hubbard
Section 230 of the Communications Decency Act, dubbed “the 26 words that created the internet,”1 has been instrumental in shaping the modern internet by providing broad immunity to online platforms for user-generated content. However, this protection has grown controversial as platforms face mounting criticism for enabling the proliferation of harmful content while bearing minimal accountability.2 In landmark cases such as Force v. Facebook (2019) and Gonzalez v. Google (2023), platforms avoided liability even when their algorithms actively promoted harmful content from terrorist organizations. A Pew Research study found that 72% of Americans believe social media companies have too much power, with 71% supporting increased platform accountability.3 The emergence of AI-generated content and algorithmic content promotion has further complicated the regulatory landscape. Current debates highlight an ironic twist: a law originally intended to encourage content moderation is now criticized for enabling too little moderation of harmful content.
Events such as Twitter’s decision to suspend President Trump’s account4 have raised questions about the concentration of power in the hands of platform executives to shape public discourse. Mounting concerns about platforms’ role in spreading harmful content like hate speech, revenge porn,5 and disinformation have formed the backdrop for a growing bipartisan push for reform. The courts’ broad interpretation of Section 230, particularly regarding algorithmic recommendation systems, has also raised fundamental questions about whether platforms should retain immunity when their systems actively amplify harmful content. The current debate centers on what experts call the “repeal and replace” paradigm: while there is broad agreement that the existing framework is inadequate, stakeholders disagree on how to balance free expression with accountability. This discussion has become increasingly urgent as AI capabilities expand and content targeting becomes more sophisticated.
Sunset Section 230 to remove the liability shield for social media companies’ algorithmic amplification while protecting citizens’ direct speech. Implement new legislation that targets recommendation systems rather than user speech and holds platforms accountable for demonstrable harms caused by their proprietary algorithms.
Encourage innovation by mandating interoperability standards and data portability for social media platforms, which would allow users to transfer their data and social connections between services. This both gives users more control over how their information is used and encourages greater competition in digital markets.
Invest in public-interest-oriented digital infrastructure that provides essential services without the profit-maximizing incentives that can lead to harmful algorithmic amplification.
Non-resident Senior Fellow, Allen Lab for Democracy Renovation;
Co-Director and Co-Investigator, GETTING-Plurality Research Network