Occasional Paper  

A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism

This paper provides a roadmap for AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology: to advance human flourishing. Advancing human flourishing in turn requires democratic and political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage narrow risks but also to construe the risks and opportunities much more broadly, and to make correspondingly large investments in public goods, personnel, and democracy itself. To lay out this vision, we take four steps. First, we define some central concepts in the field, disambiguating among forms of technological harm and risk. Second, we review normative frameworks for governing emerging technology currently in use around the globe. Third, we outline an alternative normative framework grounded in power-sharing liberalism. Fourth, we walk through a series of governance tasks that any policy framework guided by our model of power-sharing liberalism ought to accomplish. We conclude with proposals for implementation vehicles.

Related Resources

Additional Resource

VIDEOS: After Neoliberalism From Left to Right

After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.

Open Access Resource

Crocodile tears: Can the ethical-moral intelligence of AI models be trusted?

In their recently published paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, in Springer's AI & Ethics, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models along the dimensions of moral expertise, sensitivity, coherence, and transparency.

Additional Resource

Storytelling Pathways to Civics Engagement

Watch Roadtrip Nation’s Living Civics documentary and hear from leading experts in universal civic learning on the power of narrative for civic engagement.

More on this Issue

Policy Brief

AI for Democracy Movements: Toward a New Agenda

A new report summarizes key insights from the Nonviolent Action Lab’s December 2025 convening on how artificial intelligence can empower pro-democracy movements.
