Summit on AI and Democracy

On November 7, 2023, the Summit on AI and Democracy gathered experts across multiple institutions to discuss ongoing research, policy, and development efforts related to the recent advancements in artificial intelligence.

About the Convening

The Summit was hosted by the GETTING-Plurality Research Network, part of the Allen Lab for Democracy Renovation, housed at the Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation.

The Summit was composed of various lightning talks and breakout groups, with a focus on the following topics:

  • The opportunities and challenges that technologies such as AI pose to democracy, and how we might govern these systems.
  • Research and emerging methods for incorporating greater democratic input into the development of AI.
  • New developments and tools in support of democracy, plurality, and collective intelligence, and explorations of how we might use these innovations for good.

Featured Presenters

  • Danielle Allen
  • Allison Stanger
  • Eric Beerbohm
  • Jonathan Zittrain
  • Tina Eliassi-Rad
  • Aviv Ovadya
  • Shrey Jain
  • Beth Noveck
  • Divya Siddarth
  • Deep Ganguli
  • Amy Larsen
  • Kasia Sitkiewicz
  • Madeleine Daepp
  • Seth Lazar
  • Hélène Landemore
  • Glen Weyl
  • Bruce Schneier
  • Puja Ohlhaver
  • Vivian Chen
  • Wes Chow
  • Kinney Salesne
  • Nick Pyati
  • Luke Thorburn
  • Alex Pascal
  • Archon Fung

Related Resources

Crocodile tears: Can the ethical-moral intelligence of AI models be trusted?

Open Access Resource

In their recently published paper in Springer AI & Ethics, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across the dimensions of moral expertise, sensitivity, coherence, and transparency.


AI & Democracy: Perspectives from an Emerging Field

Additional Resource

The Allen Lab is proud to have contributed to this timely landscape report from The David & Lucile Packard Foundation mapping the emerging field of AI and democracy.