Truth from the machine: artificial intelligence and the materialization of identity

When AI is seen as a source of truth and scientific knowledge, it may lend public legitimacy to harmful ideas about identity.

Critics now articulate their worries about the technologies, social practices and mythologies that comprise Artificial Intelligence (AI) in many domains. In this paper, we investigate the intersection of two domains of criticism: identity and scientific knowledge. On one hand, critics of AI in public policy emphasise its potential to discriminate on the basis of identity. On the other hand, critics of AI in scientific realms worry about how it may reorient or disorient research practices and the progression of scientific inquiry. We link the two sets of concerns—around identity and around knowledge—through a series of case studies. In our case studies, about autism and homosexuality, AI figures as part of scientific attempts to find, and fix, forms of identity. Our case studies are instructive: they show that when AI is deployed in scientific research about identity and personality, it can naturalise and reinforce biases. The identity-based and epistemic concerns about AI are not distinct. When AI is seen as a source of truth and scientific knowledge, it may lend public legitimacy to harmful ideas about identity.

More from this Program

Ethical-Moral Intelligence of AI

Occasional Paper

In a new working paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

Sunset Section 230 and Unleash the First Amendment

Open Access Resource

Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, envisions a post-Section 230 landscape that fosters innovation in digital public spaces using models optimized for the public interest rather than attention metrics.

More on this Issue

Digital Civic Infrastructure for Massachusetts Workshop

Feature

The Allen Lab for Democracy Renovation and the Bloomberg Center for Cities brought together civic technologists, researchers, and municipal and state leaders from across Massachusetts for a workshop on digital civic infrastructure.