Video
Feature
AI and the 2024 Elections
From misinformation to AI panic, experts joined the Allen Lab’s GETTING-Plurality event to discuss the threats the burgeoning technology poses to democracy.
In 2024, a record number of voters will head to the polls. Elections are scheduled to take place in at least 64 countries, collectively representing approximately 49% of the global population. And this year, there’s a noteworthy new development: emerging AI technologies, which have the potential to impact electoral processes and outcomes in a variety of ways.
“Many polls have come out recently showing just how high the anxiety is in the general public about the impact of artificial intelligence on our elections,” said Danielle Allen during a panel session co-hosted by the GETTING-Plurality Research Network, a project of the Ash Center’s Allen Lab for Democracy Renovation at HKS, and Connection Science at MIT Media Lab. Participants discussed how to leverage AI’s potential to bolster democratic engagement and strengthen election integrity while addressing the technology’s adverse effects.
The risk of misinformation
Allen emphasized that new technologies have been leading to misinformation for some time. “We now live in a world where the capacity to generate misinformation and disinformation absolutely swamps the capacity of fact checkers,” she said. This environment will lead to “a lot of stress tests of our electoral systems all across the globe” in the months ahead.
In response to these challenges, Allen stressed the urgency of addressing misinformation and disinformation and advocated for improved incentive structures in the electoral system. She outlined three key steps to protect the capacity to process information during elections: identifying trusted sources early, guarding against deepfakes by presuming unverified media is fictitious, and prioritizing education to build digital competence across the broader population.
“We now live in a world where the capacity to generate misinformation and disinformation absolutely swamps the capacity of fact checkers.”
Danielle Allen, James Bryant Conant University Professor
Potential to dissuade voters
Allen also noted that much of AI’s potential political power lies in campaign strategies that aim to disincentivize people from voting. In response, she urged instituting universal voting and transitioning from plurality voting to ranked-choice voting. “We can think really hard about our electoral system and make choices for our electoral system that incentivize positive behavior from candidates.”
Threat to election infrastructure, AI panic
Nate Persily, Professor of Law at Stanford Law School and former Senior Research Director of the Presidential Commission on Election Administration, described the increased vulnerability of election infrastructure due to AI and other trends. “The basic rule for AI and democracy is that it amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had,” he summarized.
Persily expressed greater concern about public perception in 2024 than about AI itself. “AI panic is itself a democracy problem,” he argued: the actual prevalence of deepfakes is minimal, but media coverage can greatly amplify the sense of threat. The issue with AI and democracy, then, isn’t just the risk of believing false information but the erosion of trust in authentic content. “But in the end, look, we’ve got paleolithic emotions, medieval institutions, and God-like technology,” he said. “And so, this technology that we’ve developed is going to have effects on our democracy. But I think it’s of less significance than the sociological factors that are really causing some problems in both U.S. democracy and around the world.”
Balancing privacy and protection
Sandy Pentland, Professor of Media Arts and Sciences at MIT, focused on the foundational role of identity and reputation in mitigating online threats and establishing trust. Both Allen and Pentland referenced Taiwan as a model for balancing privacy while protecting against disinformation and online crime. There, users are anonymous on digital media but verified as actual humans. Pentland noted that even crypto exchanges now require identification, which is then kept confidential.
“And so, what we have to do is we have to think about, ‘Can we do that in media?’” Pentland asked. “And the answer is pretty [much] yes. We have most of the infrastructure there already to do it.” He contended that the mechanisms used in Taiwan and in crypto exchanges offer a way to understand whom one interacts with without compromising privacy. “I would suggest that we have this sort of fairly radical principle: a complete anonymity in opposition to the ability to track down bad guys and have some sort of knowledge of who it is that we’re dealing with.”
Watch the Event Recording
More from this Program
Commentary
Sunset and Renew: Section 230 Should Protect Human Speech, Not Algorithmic Virality
Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, advocates for a “repeal and renew” approach to Section 230 in an effort to reform the current social media ecosystem.
Video
Can Higher Education Help Renovate American Democracy?
The Allen Lab for Democracy Renovation hosted a webinar with panelists discussing new campus initiatives that offer promising pathways for higher education to reassert its vital role in strengthening democracy by engaging students’ civic learning and supporting their development as civic actors.
Commentary
Tech Policy that (Actually) Serves the People
Allen Lab for Democracy Renovation Fellow Ami Fields-Meyer lays out research questions for developing a new U.S. tech policy agenda that puts people first.
Occasional Paper
The National Security Case for Public AI
Allen Lab for Democracy Renovation Fellow Alex Pascal and Vanderbilt Law Professor Ganesh Sitaraman make the case that public options for AI and public utility-style regulation of AI will enhance national security by ensuring innovation and competition, preventing abuses of power and conflicts of interest, and advancing public interest and national security goals.