Policy Brief  

AGI and Democracy

We face a fundamental question: is the very pursuit of Artificial General Intelligence (AGI) the kind of aim democracies should allow?


If we are a long way short of AGI, why worry about it now?

Seth Lazar and Alex Pascal argue that the people building the most advanced AI systems are explicitly and aggressively working to bring AGI about, and that they expect to get there within two to five years. Even some of the most publicly skeptical AI researchers do not rule out AGI within this decade. If we, the affected public, do not actively shape this agenda now, we may lose the chance to do so at all.


