Policy Brief  

GETTING-Plurality Comments on Modernizing the Privacy Act of 1974

The GETTING-Plurality Research Network submitted a comment in response to Representative Trahan’s Request for Information on modernizing the Privacy Act of 1974.


Ajeet Singh is a Physician Instructor and Clinical Informaticist at Rush University Medical Center. 

Anna Lewis is a bioethicist focusing on the Ethical, Legal and Social Implications of Genetics and Genomics (ELSI).

Sarah Hubbard, former Technology & Public Purpose Fellow at the Belfer Center under Secretary Ash Carter, is the Associate Director for Technology & Democracy at the Ash Center’s Allen Lab for Democracy Renovation.  

Allison Stanger is a Senior Fellow at the Allen Lab for Democracy Renovation and Co-Director and Co-Investigator of the GETTING-Plurality Research Network.

The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates. 

Related Resources

Voter Experience Summit Recap

Commentary

Allen Lab Fellow Hillary Lehr convened a Voter Experience Summit at Harvard’s Ash Center in March, bringing together 25 cross-sector experts to rigorously map the voter journey. This essay explores how that collaborative process could lay the groundwork for new interventions to understand and improve the experience of voting for all.

VIDEOS: After Neoliberalism From Left to Right

Additional Resource

After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.

Panel videos below.

Crocodile tears: Can the ethical-moral intelligence of AI models be trusted?

Open Access Resource

Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across the dimensions of moral expertise, sensitivity, coherence, and transparency in their paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, recently published in Springer’s AI and Ethics.

More on this Issue

AI for Democracy Movements: Toward a New Agenda

Policy Brief

A new report summarizes key insights from the Nonviolent Action Lab’s December 2025 convening on how artificial intelligence can empower pro-democracy movements.
