This paper aims to provide a roadmap to AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage narrow risk but also to construe the risks and opportunities much more broadly, and to make correspondingly large investments in public goods, personnel, and democracy itself. To lay out this vision, we take four steps. First, we define some central concepts in the field, disambiguating between forms of technological harms and risks. Second, we review normative frameworks governing emerging technology that are currently in use around the globe. Third, we outline an alternative normative framework based in power-sharing liberalism. Fourth, we walk through a series of governance tasks that ought to be accomplished by any policy framework guided by our model of power-sharing liberalism. We follow these with proposals for implementation vehicles.
Q&A: Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?
As artificial intelligence becomes more embedded in everyday decision-making, its role in shaping how people think about ethics and morality is drawing increasing scrutiny. In this conversation with researcher Sarah Hubbard, we discuss insights from her co-authored paper, “Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?”—examining how AI systems respond to moral dilemmas, and what this reveals about the risks, limitations, and need for greater transparency and human oversight in AI-driven ethical guidance.
Bootstrap Blackness: Black Men, Conservatism, and Party Politics
A new research article by Dr. Christine Slaughter, Research Fellow at the Allen Lab for Democracy Renovation, and co-authors examines the narrative of Black men's political "shift right." The study finds that Black men remain overwhelmingly Democratic, despite growing public attention to ideological divides.
Allen Lab Fellow Hillary Lehr convened a Voter Experience Summit at Harvard’s Ash Center in March, bringing together 25 cross-sector experts to rigorously map the voter journey. This essay explores how that collaborative process could lay the groundwork for new interventions to understand and improve the experience of voting for all.
A new report summarizes key insights from the Nonviolent Action Lab’s December 2025 convening on how artificial intelligence can empower pro-democracy movements.
After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.