Policy Brief  

How AI Fails Us



The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This "actually existing AI" vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous: it optimizes for artificial metrics of human replication rather than for systemic augmentation, and it tends to concentrate power, resources, and decision-making in an engineering elite. Alternative visions based on participating in and augmenting human creativity and cooperation have a long history and underlie many celebrated digital technologies such as personal computers and the internet. Researchers and funders should redirect focus from centralized autonomous general intelligence to a plurality of established and emerging approaches that extend cooperative and augmentative traditions, as seen in successes ranging from Taiwan's digital democracy project to collective intelligence platforms like Wikipedia. We conclude with a concrete set of recommendations and a survey of alternative traditions.

More from this Program

Ethical-Moral Intelligence of AI

Occasional Paper
In a new working paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

Sunset Section 230 and Unleash the First Amendment

Open Access Resource
Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, envisions a post-Section 230 landscape that fosters innovation in digital public spaces using models optimized for public interest rather than attention metrics.

A Framework for Digital Civic Infrastructure

Additional Resource
Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action.