Policy Brief  

GETTING-Plurality Comments to White House OSTP on National Priorities for Artificial Intelligence

The White House recently released a series of announcements focused on responsible artificial intelligence, including a White House Office of Science and Technology Policy request for information on national priorities for AI. The GETTING-Plurality Research Network submitted a series of memos which respond to various questions posed around the topics of bolstering democracy and civic participation; protecting rights, safety, and national security; and promoting economic growth and good jobs.

  • In the Bolstering Democracy memo, we outline how to harness the opportunities this technology presents for a flourishing democracy, while also strengthening our democratic institutions against the threats posed by AI.
  • In the National Security memo, we present an analytical framework defining key concepts and hazard tiers for different categories of AI, along with organizational and regulatory frameworks and structures for implementation.
  • In the Economic Growth memo, we partnered with New America to develop a set of policy recommendations to promote economic growth and good jobs.

GETTING-Plurality looks forward to further engagement and discussion on AI governance.

Related Resources

After Neoliberalism: From Left to Right

Additional Resource

After Neoliberalism: From Left to Right brought together hundreds of leading economists, political scientists, journalists, writers, and thinkers from across the political spectrum to explore and debate emerging visions for the future of the political economy.

Crocodile tears: Can the ethical-moral intelligence of AI models be trusted?

Open Access Resource

Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency in their recently published paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, published in Springer's AI & Ethics.

Storytelling Pathways to Civics Engagement

Additional Resource

Watch Roadtrip Nation’s Living Civics documentary and hear from leading universal civic learning experts on the power of narrative for civic engagement.

More on this Issue

AI & Democracy: Perspectives from an Emerging Field

Additional Resource

The Allen Lab is proud to have contributed to this timely landscape report from The David & Lucile Packard Foundation mapping the emerging field of AI and democracy.