Policy Brief  

GETTING-Plurality Comments to White House OSTP on National Priorities for Artificial Intelligence

The White House recently released a series of announcements focused on responsible artificial intelligence, including a request for information from the Office of Science and Technology Policy (OSTP) on national priorities for AI. The GETTING-Plurality Research Network submitted a series of memos responding to questions on bolstering democracy and civic participation; protecting rights, safety, and national security; and promoting economic growth and good jobs.

  • In the Bolstering Democracy memo, we outline how to harness the opportunities this technology presents for a flourishing democracy, while also strengthening our democratic institutions against the threats posed by AI.
  • In the National Security memo, we present an analytical framework that defines key concepts and hazard tiers for different categories of AI, along with organizational approaches to regulation and other regulatory structures for implementation.
  • In the Economic Growth memo, we partnered with New America to develop a set of policy recommendations to promote economic growth and good jobs.

GETTING-Plurality looks forward to further engagement and discussion on AI governance.

More from this Program

Ethical-Moral Intelligence of AI

Occasional Paper

In a new working paper, Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?, Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

Sunset Section 230 and Unleash the First Amendment

Open Access Resource

Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, envisions a post-Section 230 landscape that fosters innovation in digital public spaces using models optimized for the public interest rather than attention metrics.

A Framework for Digital Civic Infrastructure

Additional Resource

Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action.
