Occasional Paper  

A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism


This paper aims to provide a roadmap for AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology: to advance human flourishing. Advancing human flourishing in turn requires democratic and political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage narrow risks but also to construe the risks and opportunities much more broadly, and to make correspondingly large investments in public goods, personnel, and democracy itself. To lay out this vision, we take four steps. First, we define some central concepts in the field, distinguishing among forms of technological harm and risk. Second, we review normative frameworks for governing emerging technology that are currently in use around the globe. Third, we outline an alternative normative framework based on power-sharing liberalism. Fourth, we walk through a series of governance tasks that ought to be accomplished by any policy framework guided by our model of power-sharing liberalism. We conclude with proposals for implementation vehicles.
