
Who’s accountable for AI usage in digital campaign ads? Right now, no one.

In a new essay, Bruce Schneier and Nathan Sanders argue that AI is poised to dramatically escalate digital campaigning and outline how accountability measures across platforms, government, and the media can curb the risks.


If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether a candidate’s use of AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use dystopic images generated by AI in political attack ads?

For now, the answer to these questions is probably “yes.” These are fairly innocuous uses of AI, no different from the old-school approach of hiring actors and staging a photoshoot, or using video editing software. Even in cases where AI tools are put to scurrilous purposes, that’s probably legal in the US system. Political ads are, after all, a medium in which you are explicitly permitted to lie.

The concern over AI is a distraction, but one that can help draw focus to the real issue. What matters isn’t how political content is generated; what matters is the content itself and how it is distributed.

Future uses of AI by campaigns go far beyond deepfaked images. Campaigns will also use AI to personalize communications. Whereas the previous generation of social media microtargeting was celebrated for letting campaigns reach audiences as small as a few thousand, or even a few hundred, voters, the automation offered by AI will allow campaigns to tailor their advertisements and solicitations to the individual voter.

Most significantly, AI will allow digital campaigning to evolve from a broadcast medium to an interactive one. AI chatbots representing campaigns are capable of responding to questions instantly and at scale, like a town hall taking place in every voter’s living room, simultaneously. Ron DeSantis’ presidential campaign has reportedly already started using OpenAI’s technology to handle text message replies to voters.

At the same time, it’s not clear whose responsibility it is to keep US political advertisements grounded in reality—if it is anyone’s. The FEC’s role is limited to campaign finance, and its authority has been further circumscribed by the Supreme Court’s repeated stripping of its powers. The Federal Communications Commission has much more expansive responsibility for regulating political advertising in broadcast media, as well as political robocalls and text communications, but it hasn’t done much in recent years to curtail political spam. The Federal Trade Commission enforces truth-in-advertising standards, but political campaigns have been largely exempted from those requirements on First Amendment grounds.

To further muddy the waters, much of the online space remains loosely regulated, even as campaigns have fully embraced digital tactics. There are still insufficient disclosure requirements for digital ads. Campaigns pay influencers to post on their behalf to circumvent paid advertising rules. And there are essentially no rules beyond the simple use of disclaimers for videos that campaigns post organically on their own websites and social media accounts, even if they are shared millions of times by others.

Almost everyone has a role to play in improving this situation.

Let’s start with the platforms. Google announced earlier this month that it would require political advertisements on YouTube and its other advertising platforms to disclose when they include AI-generated images, audio, or video. This is to be applauded, but we cannot rely on voluntary actions by private companies to protect our democracy. Such policies, even when well-meaning, will be inconsistently devised and enforced.

The FEC should use its limited authority to stem this coming tide. The FEC’s present consideration of rulemaking on this issue was prompted by Public Citizen, which petitioned the Commission to “clarify that the law against ‘fraudulent misrepresentation’ (52 U.S.C. §30124) applies to deliberately deceptive AI-produced content in campaign communications.” The FEC’s regulation against fraudulent misrepresentation (11 C.F.R. §110.16) is very narrow; it simply bars candidates from pretending to speak on behalf of their opponents in a “damaging” way.

Extending this to explicitly cover deepfaked AI materials seems appropriate. We should broaden the standard to robustly regulate fraudulent misrepresentation, whether the entity performing it is human or AI—but this is only the first step. If the FEC takes up rulemaking on this issue, it could further clarify what constitutes “damage.” Is it damaging when a PAC promoting Ron DeSantis uses an AI voice synthesizer to generate a convincing facsimile of his opponent Donald Trump’s voice speaking words Trump himself had tweeted? That seems like fair play. What if opponents manipulate the tone of the speech in a way that misrepresents its meaning? What if they put made-up words in Trump’s mouth? Those use cases seem to go too far, but drawing the boundaries between them will be challenging.

Congress has a role to play as well. Senator Klobuchar and colleagues have been promoting both the previously introduced Honest Ads Act and the newly proposed REAL Political Ads Act, which would expand the FEC’s disclosure requirements for content posted on the Internet and create a legal requirement for campaigns to disclose when they have used AI-generated images or video in political advertising. While that’s worthwhile, it focuses on the shiny object of AI and misses the opportunity to strengthen the law around the underlying issues. The FEC needs more authority to regulate campaign spending on false or misleading media generated by any means and published to any outlet. Meanwhile, the FEC’s own Inspector General continues to warn Congress that the agency is strained by flat budgets that don’t allow it to keep pace with ballooning campaign spending.

It is intolerable for such a patchwork of commissions to be left to wonder which of them, if any, has jurisdiction to act in the digital space. Congress should legislate to make clear that there are guardrails on political speech and to better draw the boundaries between the FCC’s, FEC’s, and FTC’s roles in governing it. While the Supreme Court cannot be relied upon to uphold common-sense regulations on campaigning, there are strategies for strengthening regulation under the First Amendment. And Congress should allocate more funding for enforcement.

The FEC has asked Congress to expand its jurisdiction, but no action is forthcoming. The present Senate Republican leadership is seen as an ironclad barrier to expanding the Commission’s regulatory authority. Senate Majority Leader Mitch McConnell has a decades-long history of being at the forefront of the movement to deregulate American elections and constrain the FEC. In 2003, he brought the unsuccessful Supreme Court case against the McCain-Feingold campaign finance reform act (the one that failed before the Citizens United case succeeded).

The most impactful regulatory change would be to require disclosure of interactive applications of AI by campaigns—and this should fall under the remit of the FCC. If a neighbor texts me and urges me to vote for a candidate, I might find that meaningful. If a bot does it under the instruction of a campaign, I definitely won’t. But I might find a conversation with the bot—knowing it is a bot—useful for learning about the candidate’s platform and positions, as long as I can be confident it will give me trustworthy information.

The FCC should enter rulemaking to expand its authority for regulating peer-to-peer (P2P) communications to explicitly encompass interactive AI systems. And Congress should pass enabling legislation to back it up, giving the FCC authority to act not only over SMS text messaging but also across the wider Internet, where AI chatbots can be accessed via the web and through apps.

The media has a role as well. We can still rely on it to report which videos, images, and audio recordings are real and which are fake. Deepfake technology may make it impossible to verify the truth of what was said in private conversations, but that was always unstable territory.

What is your role? Those who share these concerns can submit a comment to the FEC’s open public comment process before October 16, urging it to use its available authority. We all know government moves slowly, but a show of public interest is necessary to get the wheels moving.

Ultimately, all these policy changes serve the purpose of looking beyond the shiny distraction of AI to create the authority to counter bad behavior by humans. Remember: behind every AI is a human who should be held accountable.
