Commentary  

Disclosure Dilemmas: AI Transparency is No Quick Fix

In a new essay, Mary Graham argues that transparency measures can help curtail AI-related risks, but not overnight: transparency efforts require sustained, long-term engagement.


On July 21, President Biden announced commitments by seven leading companies to manage the public risks of artificial intelligence by providing greater transparency. His announcement is the latest instance of U.S. and European governments’ optimistic faith that providing more information about AI platforms will prove a reliable way of assuring public safety. Unfortunately, long experience has shown that constructing even simple disclosure measures that actually succeed in reducing public risks is a devilishly difficult task. Creating effective transparency when AI technology is advancing rapidly, applications are diverse, and risks are uncertain is uncharted territory. Understanding the difficulties will improve the chances of success.

Since the release of ChatGPT on November 30, 2022, more than 100 million people have used the engaging platform to tackle business issues, organize term papers, solve math problems, learn new sports, write poetry, plan dinner, diagnose health problems, size up political candidates, and otherwise enrich their lives. At the same time, many fear that artificial intelligence platforms might invade privacy, steal intellectual property, encourage financial crimes, mislead voters, create false identities, discriminate against vulnerable groups, and threaten national security.

Given this mix of benefits, dangers, and uncertainties, transparency has seemed like a reasonable place to start. In June the European Parliament updated draft legislation to require companies that do business in the European Union to disclose information based on the degree of risk their AI applications create. Disclosures may include revealing when content is AI-generated, registering AI systems with a new EU database, providing a summary of copyrighted material used in training the systems, and publishing risk assessments.1 The EU’s 2022 Digital Services Act already requires large online platforms to assess risks, reveal how content came to be recommended, undergo independent audits, and give regulators access to algorithms.2

President Biden’s July 21 announcement included companies’ commitments to greater transparency in security testing, risk management practices, platform capabilities and limits, and the technology’s use.3 A 2020 executive order requires federal agencies to disclose when and how officials are using AI,4 and the Biden administration has listed transparency as one of five principles of a proposed AI bill of rights.5 Policies for agency applications of the technology are in the works.6 Meanwhile, Congress is at work on bipartisan legislation. In recent hearings, senators suggested creating a “nutritional label” for AI systems.7

Reducing public risks by requiring disclosure of information is not a new idea. In the 1930s, investors’ devastating losses during the Depression triggered requirements that publicly traded companies disclose financial risks. Congress requires automakers to post safety ratings on new-car stickers. Food companies must include nutritional labels on every can of soup and box of cereal. New rules require hospitals to disclose the prices of their services and broadband providers to reveal prices and speeds.8

The idea behind all these measures is that standardized, comparable information will inform consumers’ choices and lead companies to create healthier products, safer workplaces, less risky investments – and now perhaps safer AI platforms. They reflect an enduring political consensus that markets do not always provide all the information that shoppers and workers need.

But much can go wrong with transparency policies. Like other government regulations, they are political compromises. Their architecture can be gerrymandered to promote some companies’ interests over others. They can create new inequalities by making information accessible to some people but not to others.9

Companies may provide new information in ways that are not useful, accessible, or understood. Busy consumers may not have a real choice or may lack the time or interest to switch to safer products. Even if they do, companies may not have the capacity or interest to lower risks in response. The important question becomes whether public information gains in scope, accuracy, and use over time.

AI regulators need to build their rules with an understanding that transparency is not a quick fix. History suggests that requiring public information in ways that lead to safer products and services can take years or decades. It calls for a long-term commitment to engaging consumers in its improvement, to waging needed battles against attempts to game the system, and to monitoring and improving the disclosure rules based on experience.

Consider the history of corporate financial reporting. The Securities Act of 1933 and the Securities Exchange Act of 1934 did not initially provide standardized reporting or include railroads and banks. Felix Frankfurter, then an advisor to President Franklin Roosevelt, called the reporting requirement “a modest first installment” in protecting the public. Over time, however, shareholders and investors supported its expansion, and public companies found the data useful. An ecosystem of accountants, auditors, and rating agencies acquired an interest in its viability, and successive crises led Congress to strengthen its scope and accuracy.10 Later, accountants took the lead in proposing international accounting standards that have now been adopted by 168 countries.11

Consensus may be forming around AI transparency requirements keyed to the risk categories of applications, with disclosure of AI’s use, training, safety protocols, and risk assessments. Regulators can build in constant monitoring of disclosures for effectiveness, frequent updates, and enforcement to minimize evasion. These steps would provide, as Frankfurter put it, at least a modest first installment.
