
Policy Brief
GETTING-Plurality Comments on Modernizing the Privacy Act of 1974
The GETTING-Plurality Research Network submitted a comment to Representative Trahan’s Request for Information to modernize the Privacy Act of 1974.
Commentary
The GETTING-Plurality Research Network recently submitted a public comment on the NIST U.S. AI Safety Institute’s “Managing Misuse Risk for Dual-Use Foundation Models” draft guidance. The full text of the public comment can be found below.
Below you will find the GETTING-Plurality public comment on the U.S. AI Safety Institute guidance. Public commenting is an important opportunity to have a voice on the topic at hand, and it is essential to providing input in the development of effective rules and regulations that serve the community.
The U.S. AI Safety Institute’s guidelines for managing AI misuse risks are commendable, especially their focus on mitigating risks before deployment. The principles in Objectives 5 and 6, which emphasize both pre-deployment safety and ongoing post-deployment monitoring, are particularly strong. The recommendations for independent third-party safety research, external safety testing, and internal reporting protections are also welcomed. Overall, the draft guidance offers a solid foundation for ensuring the safety of dual-use AI models for the public.
The GETTING-Plurality Research Network appreciates the opportunity to provide feedback on the “Managing Misuse Risk for Dual-Use Foundation Models” draft guidance. We’ve compiled comments below from members of our research network.
We commend the leadership of the U.S. AI Safety Institute for establishing guidelines for managing misuse risk and, importantly, for its bedrock principle that risk be properly managed and mitigated before AI deployment. This is a positive step forward for AI safety and a promising direction for the development of standards in the field. In particular, we felt the recommendations in Objectives 5 and 6 were especially strong. We appreciated the recognition of the full lifecycle of AI in ensuring safety – not only the attention to safety in the pre-deployment stages of AI development, but also the emphasis on post-deployment monitoring and response. We agree with the need to provide safe harbors for independent third-party safety research; the need to establish a robust regime of both external safety testing of models and protections for internal reporting of safety concerns; and the creation of other internal processes and norms that will set an organization up for success. We believe that this draft guidance is a strong starting point for the guidelines needed to ensure that dual-use foundation models are as safe as possible for the public.
Below, we offer a few suggestions for consideration in revision that we believe will further strengthen the guidance:
The comments provided are from members of our research community. For any additional information, please reach out to Sarah Hubbard (sarah_hubbard@hks.harvard.edu).
Commentary
Allen Lab for Democracy Renovation Fellow Dr. Shlomit Wagman lays out a framework to address the threats artificial intelligence poses to global security and democratic institutions.
Additional Resource
In a recent piece for Tech Policy Press, Allen Lab Senior Fellow Alex Pascal and Nathan Sanders outline how US states are well-positioned to lead the development of Public AI. State governments can act as “laboratories of twenty-first century democracy” to experiment with AI applications that directly benefit citizens.
Feature
What kind of democracy do legislators want? This question was at the center of a recent discussion with Melody Crowder-Meyer, associate professor of political science at Davidson College, as part of the American Politics Speaker Series.
Commentary
At a recent Ash Center panel, experts and AI developers discussed how AI’s influence on politics has evolved over the years. They examined the new tools available to politicians, the role of humans in AI’s relationship with governance, and the values guiding the design of these technologies.