Occasional Paper  

A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism

This paper aims to provide a roadmap for AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology: one that advances human flourishing. Advancing human flourishing in turn requires democratic and political stability and economic empowerment. Our overarching point is that answering the question of how we should govern this emerging technology is a chance not merely to categorize and manage narrow risks but also to construe the risks and opportunities much more broadly, and to make correspondingly large investments in public goods, personnel, and democracy itself. To lay out this vision, we take four steps. First, we define central concepts in the field, disambiguating among forms of technological harm and risk. Second, we review the normative frameworks for governing emerging technology currently in use around the globe. Third, we outline an alternative normative framework grounded in power-sharing liberalism. Fourth, we walk through a series of governance tasks that any policy framework guided by power-sharing liberalism ought to accomplish. We conclude with proposals for implementation vehicles.

More from this Program

Commentary

Transparency is Insufficient: Lessons From Civic Technology for Anticorruption

Allen Lab Researcher David Riveros Garcia draws on his experience building civic technology to fight corruption in Paraguay to make the case that effective civic technology must include power and collective action in its design.

Additional Resource

Allen Lab Fellow Spotlight: The Case for Building an AmeriCorps Alumni Leadership Network

In a new essay, The Case for Building an AmeriCorps Alumni Leadership Network, Allen Lab Policy Fellow Sonali Nijhawan argues that the 1.4 million Americans who have completed national service represent an underleveraged civic asset. Drawing on her experience as former Director of AmeriCorps, Nijhawan outlines a roadmap for transforming dispersed alumni into a connected leadership network capable of reinvigorating public service, rebuilding trust in government, and strengthening civic participation.

Additional Resource

The Ecosystem of Deliberative Technologies for Public Input

Ensuring public opinion and policy preferences are reflected in policy outcomes is essential to a functional democracy. A growing ecosystem of deliberative technologies aims to improve the input-to-action loop between people and their governments.

More on this Issue

Occasional Paper

Ethical-Moral Intelligence of AI

In a new working paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

Open Access Resource

Sunset Section 230 and Unleash the First Amendment

Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, envisions a post-Section 230 landscape that fosters innovation in digital public spaces using models optimized for the public interest rather than attention metrics.