
Commentary
The GETTING-Plurality Research Network recently submitted a public comment on the NIST U.S. AI Safety Institute’s “Managing Misuse Risk for Dual-Use Foundation Models” draft guidance. The full text of the public comment can be found below.
Below you will find the Plurality public comment on the U.S. AI Safety Institute guidance. Public commenting is an important opportunity to have a voice on the topic at hand, and it is essential to providing input in the development of effective rules and regulations that serve the community.
The U.S. AI Safety Institute’s guidelines for managing AI misuse risks are commendable, especially their focus on mitigating risks before deployment. The principles in Objectives 5 and 6, which emphasize both pre-deployment safety and ongoing post-deployment monitoring, are particularly strong. The recommendations for independent third-party safety research, external safety testing, and internal reporting protections are also welcome. Overall, the draft guidance offers a solid foundation for ensuring the safety of dual-use AI models for the public.
The GETTING-Plurality Research Network appreciates the opportunity to provide feedback on the “Managing Misuse Risk for Dual-Use Foundation Models” draft guidance. We’ve compiled comments below from members of our research network.
We commend the leadership of the U.S. AI Safety Institute for establishing guidelines for managing misuse risk and, importantly, for its bedrock principle that risk be properly managed and mitigated before AI deployment. This is a positive step forward for AI safety and a promising direction for the development of standards in the field. In particular, we felt the recommendations in Objectives 5 and 6 were especially strong. We appreciated the recognition of the full lifecycle of AI to ensure safety: not only consideration of the pre-deployment stages of AI development, but also the emphasis on post-deployment monitoring and response. We agree with the need to provide safe harbors for independent third-party safety research; the need to establish a robust regime of both external safety testing of models and protections for internal reporting of safety concerns; and the creation of other internal processes and norms that will set an organization up for success. We believe that this draft guidance is a strong starting point for the guidelines needed to ensure that dual-use foundation models are as safe as possible for the public.
Below, we offer a few suggestions for consideration in revision, which we believe will further strengthen the guidance:
The comments provided are from members of our research community. For any additional information, please reach out to Sarah Hubbard (sarah_hubbard@hks.harvard.edu).