Commentary  

Disclosure Dilemmas: AI Transparency is No Quick Fix

In a new essay, Mary Graham argues that transparency measures can help curtail AI-related risks, but not overnight: effective transparency requires sustained, long-term engagement.


On July 21, President Biden announced commitments by seven leading companies to manage the public risks of artificial intelligence by providing greater transparency. His announcement is the latest instance of U.S. and European governments’ optimistic faith that providing more information about AI platforms will prove a reliable way of assuring public safety. Unfortunately, long experience has shown that constructing even simple disclosure measures that actually succeed in reducing public risks is a devilishly difficult task. Creating effective transparency when AI technology is advancing rapidly, applications are diverse, and risks are uncertain is uncharted territory. Understanding the difficulties will improve the chances of success.

Since the release of ChatGPT on November 30, 2022, more than 100 million people have used the engaging platform to tackle business issues, organize term papers, solve math problems, learn new sports, write poetry, plan dinner, diagnose health problems, size up political candidates, and otherwise enrich their lives. At the same time, many fear that artificial intelligence platforms might invade privacy, steal intellectual property, encourage financial crimes, mislead voters, create false identities, discriminate against vulnerable groups, and threaten national security.

Given this mix of benefits, dangers, and uncertainties, transparency has seemed like a reasonable place to start. In June the European Parliament updated draft legislation to require companies that do business in the European Union to disclose information based on the degree of risk their AI applications create. Disclosures may include revealing when content is AI-generated, registering AI systems with a new EU database, providing a summary of copyrighted material used in training the systems, and publishing risk assessments.1 The EU’s 2022 Digital Services Act already requires large online platforms to assess risks, reveal how content came to be recommended, undergo independent audits, and give regulators access to algorithms.2

President Biden’s July 21 announcement included companies’ commitments to greater transparency in security testing, risk management practices, platform capabilities and limits, and the technology’s use.3 A 2020 executive order requires federal agencies to disclose when and how officials are using AI,4 and the Biden administration has listed transparency as one of five principles of a proposed AI bill of rights.5 Policies governing agency applications of the technology are in the works.6 Meanwhile, Congress is at work on bipartisan legislation; in recent hearings, senators suggested creating a “nutritional label” for AI systems.7

Reducing public risks by requiring disclosure of information is not a new idea. In the 1930s, investors’ devastating losses during the Depression triggered requirements that publicly traded companies disclose financial risks. Congress requires automakers to post safety ratings on new-car stickers. Food companies must include nutritional labels on every can of soup and box of cereal. New rules require hospitals to disclose the prices of their services and broadband providers to reveal prices and speeds.8

The idea behind all these measures is that standardized, comparable information will inform consumers’ choices and lead companies to create healthier products, safer workplaces, less risky investments – and now perhaps safer AI platforms. They reflect an enduring political consensus that markets do not always provide all the information that shoppers and workers need.

But much can go wrong with transparency policies. Like other government regulations, they are political compromises. Their architecture can be gerrymandered to promote some companies’ interests over others. They can create new inequalities by making information accessible to some people but not to others.9

Companies may provide new information in ways that are not useful, accessible, or understood. Busy consumers may not have a real choice or may lack the time or interest to switch to safer products. Even if they do, companies may not have the capacity or interest to lower risks in response. The important question becomes whether public information gains in scope, accuracy, and use over time.

AI regulators need to build their rules with an understanding that transparency is not a quick fix. History suggests that requiring public information in ways that lead to safer products and services can take years or decades. Doing so calls for a long-term commitment to engaging consumers in improving disclosure, to fighting attempts to game the system, and to monitoring and refining the rules based on experience.

Consider the history of corporate financial reporting. The Securities Act of 1933 and the Securities Exchange Act of 1934 did not initially provide standardized reporting or cover railroads and banks. Felix Frankfurter, then an advisor to President Franklin Roosevelt, called the reporting requirement “a modest first installment” in protecting the public. Over time, however, shareholders and investors supported its expansion, and public companies found the data useful. An ecosystem of accountants, auditors, and rating agencies acquired an interest in its viability, and successive crises led Congress to strengthen its scope and accuracy.10 Later, accountants took the lead in proposing international accounting standards that have now been adopted in 168 jurisdictions.11

Consensus may be forming around AI transparency based on risk categories of applications, with requirements to disclose a system’s use, training, safety protocols, and risk assessments. Regulators can build in constant monitoring of disclosures for effectiveness, frequent updates, and enforcement to minimize evasion. These steps would provide, as Felix Frankfurter put it, at least a modest first installment.
