In a new working paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.
As AI becomes increasingly embedded in every aspect of our lives, there is evidence that people are turning to these systems for guidance on complex issues and moral dilemmas. Whether or not one agrees that people should do so, the fact that they are necessitates a clearer understanding of the moral reasoning of these systems. To address this gap, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted? introduces an ethical-moral intelligence (EMI) framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency. We present findings from a pre-registered experiment testing the moral sensitivity of four AI models (Claude, GPT, Llama, and DeepSeek) using ethically challenging scenarios. While the models demonstrate moral sensitivity to ethical dilemmas in ways that closely mimic human responses, they exhibit greater certainty than humans when choosing between conflicting sacred values, despite recognizing such tragic trade-offs as difficult. This discrepancy between reported difficulty and decisiveness raises important questions about their coherence and transparency, undermining trustworthiness. The research reveals a critical need for more comprehensive ethical evaluation of AI systems. We discuss the implications of these findings, consider how psychological methods might be applied to understand the ethical-moral intelligence of AI models, and outline recommendations for developing more ethically aware AI that augments human moral reasoning.
Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action.
Why I’m Excited About the White House’s Proposal for a Higher Ed Compact
Last week’s leak of the U.S. Department of Education’s proposed “Compact for Academic Excellence in Higher Education” drew intense reactions across academia. Critics call it government overreach threatening free expression, while supporters see a chance for reform and renewed trust between universities and policymakers. Danielle Allen, James Bryant Conant University Professor at Harvard University, director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation, weighs in.
Setting the 2025-26 Agenda for the Allen Lab for Democracy Renovation
Amid rising illiberalism, Danielle Allen urges a new agenda to renew democracy by reorienting institutions, policymaking, and civil society around the intentional sharing of power.
Governing with AI – Learning the How-To’s of AI-Enhanced Public Engagement
Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.