Democracy as Approximation: A Primer for “AI for Democracy” Innovators
This essay was adapted from a presentation given by Aviv Ovadya at the Second Interdisciplinary Workshop on Reimagining Democracy held on the campus of Harvard Kennedy School in December 2023. Convened with support from the Ash Center for Democratic Governance and Innovation and the Belfer Center for Science and International Affairs, the conference was intended to bring together a diverse set of thinkers and practitioners to talk about how democracy might be reimagined for the twenty-first century.
Can we accelerate democratic innovation by recalling that real-world democracy is an imperfect approximation of an ideal? How might AI help?
“No matter what form of democracy we are aiming for, we should remember that all democracy is, by necessity, an imperfect approximation of an ideal worth striving for. The only question is what kind of democracy we choose to cultivate with the resources that we can bring to bear,” writes Ovadya.