Policy Brief  

Plural Publics

The authors argue that the problem of “plural publics” is a core challenge of data governance, discuss existing tools that can help address it, and propose a research agenda to further develop and integrate these tools.

By:

  • Divya Siddarth
  • Glen Weyl
  • Shrey Jain

Data governance is usually conceptualized in terms of “privacy” vs. “publicity.” Yet a core feature of pluralistic societies is association: groups that share with one another, privately. These form a diversity of “publics,” each externally private but able to coordinate and share internally. Empowering them requires tools that allow the establishment of shared communicative contexts and defend those contexts against sharing outside of context. The ease of spreading information online has challenged such “contextual integrity,” and the rise of generative foundation models like GPT-4 may radically exacerbate this challenge. In the face of this challenge, we highlight why we believe the problem of “plural publics” is a core challenge of data governance, discuss existing tools that can help address it, and outline a research agenda to further develop and integrate these tools.

Related Resources

Voter Experience Summit Recap

Commentary

Allen Lab Fellow Hillary Lehr convened a Voter Experience Summit at Harvard’s Ash Center in March, bringing together 25 cross-sector experts to rigorously map the voter journey. This essay explores how that collaborative process could lay the groundwork for new interventions to understand and improve the experience of voting for all.

VIDEOS: After Neoliberalism From Left to Right

Additional Resource

After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.

Panel videos below.

Crocodile tears: Can the ethical-moral intelligence of AI models be trusted?

Open Access Resource

In their recently published paper in Springer AI & Ethics, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

More on this Issue

AI for Democracy Movements: Toward a New Agenda

Policy Brief

A new report summarizes key insights from the Nonviolent Action Lab’s December 2025 convening on how artificial intelligence can empower pro-democracy movements.
