Commentary  

Disclosure Dilemmas: AI Transparency is No Quick Fix

In a new essay, Mary Graham argues that transparency measures can help curtail AI-related risks, but not overnight: transparency efforts require sustained, long-term engagement.


On July 21, President Biden announced commitments by seven leading companies to manage the public risks of artificial intelligence by providing greater transparency. His announcement is the latest instance of U.S. and European governments’ optimistic faith that providing more information about AI platforms will prove a reliable way of assuring public safety. Unfortunately, long experience has shown that constructing even simple disclosure measures that actually succeed in reducing public risks is a devilishly difficult task. Creating effective transparency when AI technology is advancing rapidly, applications are diverse, and risks are uncertain is uncharted territory. Understanding the difficulties will improve the chances of success.

Since the release of ChatGPT on November 30, 2022, more than 100 million people have used the engaging platform to tackle business issues, organize term papers, solve math problems, learn new sports, write poetry, plan dinner, diagnose health problems, size up political candidates, and otherwise enrich their lives. At the same time, many fear that artificial intelligence platforms might invade privacy, steal intellectual property, encourage financial crimes, mislead voters, create false identities, discriminate against vulnerable groups, and threaten national security.

Given this mix of benefits, dangers, and uncertainties, transparency has seemed like a reasonable place to start. In June the European Parliament updated draft legislation to require companies that do business in the European Union to disclose information based on the degree of risk their AI applications create. Obligations may include disclosing when content is AI-generated, registering AI systems in a new EU database, providing a summary of copyrighted material used to train the systems, and publishing risk assessments.1 The EU’s 2022 Digital Services Act already requires large online platforms to assess risks, reveal how content came to be recommended, undergo independent audits, and give regulators access to algorithms.2

President Biden’s July 21 announcement included companies’ commitments to greater transparency in security testing, risk management practices, platform capabilities and limits, and the technology’s use.3 A 2020 executive order requires federal agencies to disclose when and how officials are using AI,4 and the Biden administration has listed transparency as one of five principles of a proposed AI bill of rights.5 Policies governing agencies’ own applications of the technology are in the works.6 Meanwhile, Congress is at work on bipartisan legislation; in recent hearings, senators suggested creating a “nutritional label” for AI systems.7

Reducing public risks by requiring disclosure of information is not a new idea. In the 1930s, investors’ devastating losses during the Depression triggered requirements that publicly traded companies disclose financial risks. Congress requires automakers to post safety ratings on new-car stickers. Food companies must include nutritional labels on every can of soup and box of cereal. New rules require hospitals to disclose the prices of their services and broadband providers to reveal prices and speeds.8

The idea behind all these measures is that standardized, comparable information will inform consumers’ choices and lead companies to create healthier products, safer workplaces, less risky investments – and now perhaps safer AI platforms. They reflect an enduring political consensus that markets do not always provide all the information that shoppers and workers need.

But much can go wrong with transparency policies. Like other government regulations, they are political compromises. Their architecture can be gerrymandered to promote some companies’ interests over others. They can create new inequalities by making information accessible to some people but not to others.9

Companies may provide new information in ways that are not useful, accessible, or understood. Busy consumers may not have a real choice or may lack the time or interest to switch to safer products. Even if they do, companies may not have the capacity or interest to lower risks in response. The important question becomes whether public information gains in scope, accuracy, and use over time.

AI regulators need to build their rules with an understanding that transparency is not a quick fix. History suggests that requiring public information in ways that lead to safer products and services can take years or decades. Doing so calls for a long-term commitment to engaging consumers in its improvement, to waging needed battles against attempts to game the system, and to monitoring and improving the disclosure rules based on experience.

Consider the history of corporate financial reporting. The Securities Act of 1933 and the Securities Exchange Act of 1934 did not initially provide for standardized reporting, and they excluded railroads and banks. Felix Frankfurter, then an advisor to President Franklin Roosevelt, called the reporting requirement “a modest first installment” in protecting the public. Over time, however, shareholders and investors supported its expansion, and public companies found the data useful. An ecosystem of accountants, auditors, and rating agencies acquired an interest in its viability, and successive crises led Congress to strengthen its scope and accuracy.10 Later, accountants took the lead in proposing international accounting standards that have now been adopted by 168 jurisdictions.11

Consensus may be forming around AI transparency tiered by the risk category of the application, with requirements to disclose an AI system’s use, training, safety protocols, and risk assessments. Regulators can build in constant monitoring of disclosures for effectiveness, frequent updates, and enforcement to minimize evasion. These steps would provide, as Felix Frankfurter put it, at least a modest first installment.
