Additional Resource  

Democracy as Approximation: A Primer for “AI for Democracy” Innovators

This essay was adapted from a presentation given by Aviv Ovadya at the Second Interdisciplinary Workshop on Reimagining Democracy held on the campus of Harvard Kennedy School in December 2023. Convened with support from the Ash Center for Democratic Governance and Innovation and the Belfer Center for Science and International Affairs, the conference was intended to bring together a diverse set of thinkers and practitioners to talk about how democracy might be reimagined for the twenty-first century.

Can we accelerate democratic innovation by recalling that real-world democracy is an imperfect approximation of an ideal? How might AI help?

“No matter what form of democracy we are aiming for, we should remember that all democracy is, by necessity, an imperfect approximation of an ideal worth striving for. The only question is what kind of democracy we choose to cultivate with the resources that we can bring to bear,” writes Ovadya.

Related Resources

VIDEOS: After Neoliberalism From Left to Right

Additional Resource

After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers, and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.

Panel videos below.

Crocodile tears: Can the ethical-moral intelligence of AI models be trusted?

Open Access Resource

Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across the dimensions of moral expertise, sensitivity, coherence, and transparency in their paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, recently published in Springer’s AI and Ethics.

Storytelling Pathways to Civics Engagement

Additional Resource

Watch Roadtrip Nation’s Living Civics documentary and hear from leading universal civic learning experts on the power of narrative for civic engagement.

More on this Issue

AI for Democracy Movements: Toward a New Agenda
Policy Brief

A new report summarizes key insights from the Nonviolent Action Lab’s December 2025 convening on how artificial intelligence can empower pro-democracy movements.
