Feature  

It’s not too late to reimagine AI’s role in the world

Computer scientist turned social scientist Ashley Lee discusses how policymakers and technologists alike can change the way AI is used — or not used — across the globe.

Photo of Ashley Lee standing in front of a couch and a bookcase

On a fall day in 2017, in the run-up to a national election, Facebook (now Meta) users in Cambodia opened their mobile phones and web browsers to find a newly revised homepage for the social media platform. Overnight and without their consent, Facebook launched an AI-driven experiment that modified users’ feeds to feature more personalized content from Facebook friends, moving news updates and stories to a new “Explore Feed” tab. While most users barely noticed the change, some activists, independent news outlets, and NGOs, who served as the rare voices of political dissent in Cambodia, reported seeing an immediate drop in traffic, according to Ashley Lee, a scholar of technology, politics, and social movements and a visiting democracy fellow at the Ash Center.

In Cambodia, Lee had been working closely with activists and civil society actors, first as a human rights worker with the United Nations and then as a digital ethnographer conducting fieldwork. Her time there coincided with the country’s national elections, where opposition activists hoped to mount a serious electoral challenge to Cambodia’s longtime incumbent prime minister, Hun Sen, who had managed to maintain his grip on power for over three decades. After maneuvering Cambodia’s Supreme Court to effectively dissolve the opposition Cambodia National Rescue Party, Hun Sen’s ruling Cambodian People’s Party went on to win all 125 seats in the country’s parliament, with many human rights observers lamenting the effective end of democracy in the country.

Lee — who had watched in real time as Facebook’s changes took effect, changes she believed worsened censorship and restricted citizens’ access to news — became increasingly concerned about how technology had been co-opted as an accomplice in the Cambodian government’s anti-democratic crackdown. Her experience in Cambodia led Lee to apply many of the insights she had gained through years of building technologies as a computer scientist to studying the politics of AI and algorithms as a social scientist.

Facebook’s experiment in Cambodia is an example of what Lee terms “algorithmic violence,” a new type of violence that she describes as “stem[ming] from inequities created by an assemblage of algorithms and related practices, norms, and institutions.” The algorithmic violence Lee witnessed in Cambodia highlights how some communities — especially those in developing countries — are often excluded from the development and design of digital political infrastructure, to devastating effect.

Lee’s work soon came to center around a series of questions: Who gets to design the algorithms at play in everyday life? Who sets this agenda? And how do we attract and retain technologists and students who come from diverse backgrounds and bring different experiences, values, and worldviews to this work?

“When thinking about algorithms, these codes driving commercial digital platforms are designed by engineers, mostly in the Global North, and [often] get adapted by young people and other users in the Global South,” says Lee. Algorithmic design flaws, she argues, perpetuate biases and introduce other issues that can also undermine the reach of independent media or political opposition figures. And now with the advent of AI, the impact that a narrow set of engineers working for Western tech companies can have on global systems is even greater.

“Algorithmic violence can impact digital political infrastructures of entire communities, societies, and populations,” says Ashley Lee, Ash Center Democracy Visiting Fellow.

Unleashing what she views as skewed AI-driven systems across developing countries ventures dangerously close to what she considers “digital colonialism,” Lee argues. “The ability to control digital political infrastructure like AI systems represents a new form of political power. But when the analysis of [AI-driven] algorithmic violence remains purely at the level of the individual, the social production of society-wide suffering — political or otherwise — is overlooked,” she explains. “Algorithmic violence can impact digital political infrastructures of entire communities, societies, and populations.”

After completing her doctorate at Harvard, Lee wanted to deepen her cross-disciplinary research experience to better understand the landscape of issues at play with AI and algorithmic violence. She also wanted to continue her work developing solutions to what she saw as the unchecked threat of algorithmic violence in countries such as Cambodia. The Ash Center’s Democracy Fellowship Program was an ideal choice for Lee, especially given center director Archon Fung’s work to bring together scholars from a variety of disciplines and institutions to examine contemporary research and trends in democracy.

“I really appreciated the unique environment that the Ash Center — and especially the seminar led by Archon Fung — provided for me as an interdisciplinary scholar who is trying to combine disparate fields of inquiry to tackle these nascent questions around AI and politics,” she says. “For me, these democratic and inclusive conversations set new standards for what rigorous cross-disciplinary scholarship can aspire to.”

During her time at the Ash Center, Lee’s engagement with scholars across the center has helped her think in new directions about how to better deploy democratic methods to ethically manage algorithmic risk and AI. Lee emphasizes that these methods should equip the next generation with the tools to take a participatory (rather than prescriptive) approach to engaging communities that have traditionally been sidelined from debates about technology, helping to level global inequalities.

“To be able to identify and address [algorithmic violence], we need to be able to look beyond the code and think about the interactions between different components like legal systems, the structure of global economies, and the ways people use different technologies,” Lee says. “That goes back to the importance of looking at global power inequities and how those structures interact and create differential impacts on marginalized communities who are often at the frontlines of technological innovation and deployment.”

More on this Issue

AI and the 2024 Elections

Video

AI and the 2024 Elections

The GETTING-Plurality Research Network at the Ash Center’s Allen Lab and Connection Science at the MIT Media Lab hosted a webinar on “AI and the 2024 Elections.” In this session, we hear from Danielle Allen, Harvard University; Sandy Pentland, Massachusetts Institute of Technology; and Nate Persily, Stanford University. Each presenter gives a lightning talk, followed by an audience Q&A.

Terra Incognita: The Governance of Artificial Intelligence in Global Perspective

Additional Resource

Terra Incognita: The Governance of Artificial Intelligence in Global Perspective

GETTING-Plurality Research Network members Allison Stanger and Woojin Lim, along with other authors, published “Terra Incognita: The Governance of Artificial Intelligence in Global Perspective” in the Annual Review of Political Science.


AI and the Future of Privacy

Video

AI and the Future of Privacy

The GETTING-Plurality Research Network at the Ash Center’s Allen Lab and Connection Science at the MIT Media Lab hosted a webinar on “AI and the Future of Privacy.” In this session, we hear from Bruce Schneier, security technologist and Faculty Affiliate at the Ash Center; Sarah Roth-Gaudette, Executive Director of Fight for the Future; and Tobin South, MIT Ph.D. candidate and Fulbright Scholar. Each presenter gives a lightning talk, followed by an audience Q&A.