Feature  

AI and the 2024 Elections

Experts joined the Allen Lab’s GETTING-Plurality event to discuss the threats the burgeoning technology poses to democracy, from misinformation to AI panic.

This image was created by AI via Adobe Firefly

In 2024, a record number of voters will head to the polls. Elections are scheduled to take place in at least 64 countries, collectively representing approximately 49% of the global population. And this year, there’s a noteworthy new development: emerging AI technologies, which have the potential to impact electoral processes and outcomes in a variety of ways.

“Many polls have come out recently showing just how high the anxiety is in the general public about the impact of artificial intelligence on our elections,” said Danielle Allen during a panel session co-hosted by the GETTING-Plurality Research Network, a project of the Ash Center’s Allen Lab for Democracy Renovation at HKS, and Connection Science at MIT Media Lab. Participants discussed how to leverage AI’s potential to bolster democratic engagement and strengthen election integrity while addressing the technology’s adverse effects.

The risk of misinformation

Allen noted that new technologies have been fueling misinformation for some time. “We now live in a world where the capacity to generate misinformation and disinformation absolutely swamps the capacity of fact checkers,” she said. This environment will lead to “a lot of stress tests of our electoral systems all across the globe” in the months ahead.

In response to these challenges, Allen emphasized the urgency of addressing misinformation and disinformation and advocated for improved incentive structures in the electoral system. She outlined three key steps to protect the capacity to process information during elections: identifying trusted sources early, guarding against deepfakes by presuming media is fictitious, and prioritizing education to build digital competence in the larger population.

“We now live in a world where the capacity to generate misinformation and disinformation absolutely swamps the capacity of fact checkers.”

Danielle Allen
James Bryant Conant University Professor

Potential to dissuade voters

Allen also noted that much of AI’s potential political power lies in campaign strategies that aim to disincentivize people from voting. In response, she urged instituting universal voting and transitioning from plurality voting to ranked-choice voting. “We can think really hard about our electoral system and make choices for our electoral system that incentivize positive behavior from candidates.”

Threats to election infrastructure and AI panic

Nate Persily, Professor of Law at Stanford Law School and former Senior Research Director of the Presidential Commission on Election Administration, described the increased vulnerability of election infrastructure due to AI and other trends. “The basic rule for AI and democracy is that it amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had,” he summarized.

Persily expressed greater concern about public perception in 2024 than about AI itself. “AI panic is itself a democracy problem,” he argued, as the actual prevalence of deepfakes is minimal but media coverage can greatly magnify the perceived threat. Therefore, the issue with AI and democracy isn’t just the risk of believing false information but the erosion of trust in authentic content. “But in the end, look, we’ve got paleolithic emotions, medieval institutions, and God-like technology,” he said. “And so, this technology that we’ve developed is going to have effects on our democracy. But I think it’s of less significance than the sociological factors that are really causing some problems in both U.S. democracy and around the world.”

Balancing privacy and protection

Sandy Pentland, Professor of Media Arts and Sciences at MIT, focused on the foundational role of identity and reputation in mitigating online threats and establishing trust. Both Allen and Pentland pointed to Taiwan as a model for balancing privacy with protection against disinformation and online crime. There, users are anonymous on digital media but verified as actual humans. Pentland noted that even crypto exchanges now require identification, which is then kept confidential.

“And so, what we have to do is we have to think about, ‘Can we do that in media?’” Pentland asked. “And the answer is pretty [much] yes. We have most of the infrastructure there already to do it.” He contended that the mechanisms used in Taiwan and in crypto exchanges offer a way to understand whom one interacts with without compromising privacy. “I would suggest that we have this sort of fairly radical principle: a complete anonymity in opposition to the ability to track down bad guys and have some sort of knowledge of who it is that we’re dealing with.”

Watch the Event Recording


More from this Program

Political Economy of AI Essay Collection

Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions surrounding power, governance, and democracy as they consider how technology can better serve the public interest.

Additional Resource

Watching the Generative AI Hype Bubble Deflate

As a part of the Allen Lab’s Political Economy of AI Essay Collection, David Gray Widder and Mar Hicks draw on the history of tech hype cycles to warn against the harmful effects of the current generative AI bubble.

Additional Resource

AI and Practicing Democracy

As a part of the Allen Lab’s Political Economy of AI Essay Collection, Emily S. Lin and Marshall Ganz call on us to reckon with how humans create, exercise, and structure power, in hopes of meeting our current technological moment in a way that aligns with our values.
