Occasional Paper  

Ethical-Moral Intelligence of AI

In a new working paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

Photo by Kateryna Kovarzh

Abstract

As AI becomes increasingly embedded into every aspect of our lives, there is evidence that people are turning to these systems for guidance on complex issues and moral dilemmas. Whether or not one agrees that people should do so, the fact that they do necessitates a clearer understanding of the moral reasoning of these systems. To address this gap, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Be Trusted? introduces an ethical-moral intelligence (EMI) framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency. We present findings from a pre-registered experiment testing the moral sensitivity of four AI models (Claude, GPT, Llama, and DeepSeek) using ethically challenging scenarios. While the models demonstrate moral sensitivity to ethical dilemmas in ways that closely mimic human responses, they exhibit greater certainty than humans when choosing between conflicting sacred values, despite recognizing such tragic trade-offs as difficult. This discrepancy between reported difficulty and decisiveness raises important questions about their coherence and transparency, undermining their trustworthiness. The research reveals a critical need for more comprehensive ethical evaluation of AI systems. We discuss the implications of these specific findings, consider how psychological methods might be applied to understand the ethical-moral intelligence of AI models, and outline recommendations for developing more ethically aware AI that augments human moral reasoning.

 

Sarah Hubbard is the Associate Director for Technology & Democracy at the Ash Center’s Allen Lab for Democracy Renovation and was previously a Technology & Public Purpose Fellow at the Belfer Center. 

David Kidd is a member of the Ash Center’s Allen Lab for Democracy Renovation and also works at Harvard University’s Edmond and Lily Safra Center for Ethics.

Andrei Stupu was previously a fellow at the Ash Center’s Allen Lab for Democracy Renovation.

The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates.


