Video  

The Dark Side of AI: Crime and Adversarial Use Cases

“The Dark Side of AI: Crime and Adversarial Use Cases” webinar session featured the following speakers and topics:

  • Bruce Schneier (Harvard): Hackers and Security Vulnerabilities
  • Matt Groh (Northwestern): Deepfakes and Misinformation, see related paper The Art and Science of Generative AI
  • Shlomit Wagman (Harvard): Financial Crime
  • Jennifer Calvery (HSBC): Financial Crime

Related Resources


Q+A

Q & A: Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?

As artificial intelligence becomes more embedded in everyday decision-making, its role in shaping how people think about ethics and morality is drawing increasing scrutiny. In this conversation with researcher Sarah Hubbard, we discuss insights from her co-authored paper, “Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?”—examining how AI systems respond to moral dilemmas, and what this reveals about the risks and limitations of, and the need for greater transparency and human oversight in, AI-driven ethical guidance.


Article

Bootstrap Blackness: Black Men, Conservatism, and Party Politics

A new research article by Dr. Christine Slaughter, Research Fellow at the Allen Lab for Democracy Renovation, and co-authors examines the narrative of Black men’s political “shift right.” The study finds that Black men remain overwhelmingly Democratic, despite growing public attention to ideological divides.


Commentary

Voter Experience Summit Recap

Allen Lab Fellow Hillary Lehr convened a Voter Experience Summit at Harvard’s Ash Center in March, bringing together 25 cross-sector experts to rigorously map the voter journey. This essay explores how that collaborative process could lay the groundwork for new interventions to understand and improve the experience of voting for all.

More on this Issue



Policy Brief

AI for Democracy Movements: Toward a New Agenda

A new report summarizes key insights from the Nonviolent Action Lab’s December 2025 convening on how artificial intelligence can empower pro-democracy movements.


Additional Resource

VIDEOS: After Neoliberalism: From Left to Right

After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers, and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.
