It’s not too late to reimagine AI’s role in the world
Computer scientist turned social scientist Ashley Lee discusses how policymakers and technologists alike can change the way AI is used — or not used — across the globe.
On a fall day in 2017, in the run-up to a national election, Facebook (now Meta) users in Cambodia opened their mobile phones and web browsers to find a newly revised homepage for the social media platform. Overnight and without their consent, Facebook launched an AI-driven experiment that modified users’ feeds to feature more personalized content from Facebook friends, moving news updates and stories to a new “Explore Feed” tab. While most users barely noticed the change, some activists, independent news outlets, and NGOs, who served as the rare voices of political dissent in Cambodia, reported seeing an immediate drop in traffic, according to Ashley Lee, a scholar of technology, politics, and social movements and a visiting democracy fellow at the Ash Center.
In Cambodia, Lee had been working closely with activists and civil society actors, first as a human rights worker with the United Nations and then as a digital ethnographer conducting fieldwork. Her time there coincided with the country’s national elections, where opposition activists hoped to mount a serious electoral challenge to Cambodia’s longtime incumbent prime minister, Hun Sen, who had managed to maintain his grip on power for over three decades. After maneuvering Cambodia’s Supreme Court to effectively dissolve the opposition Cambodia National Rescue Party, Hun Sen’s ruling Cambodian People’s Party went on to win all 125 seats in the country’s parliament, with many human rights observers lamenting the effective end of democracy in the country.
Lee — who had watched in real time as Facebook rolled out changes that she believed worsened censorship and restricted citizens’ access to news — became increasingly concerned about how technology had been co-opted as an accomplice in the Cambodian government’s anti-democratic crackdown. Her experience in Cambodia led Lee to apply many of the insights she had gained through years of building technologies as a computer scientist to studying the politics of AI and algorithms as a social scientist.
Facebook’s experiment in Cambodia is an example of what Lee terms “algorithmic violence,” a new type of violence that she describes as “stem[ming] from inequities created by an assemblage of algorithms and related practices, norms, and institutions.” The algorithmic violence Lee witnessed in Cambodia highlights how some communities — especially those in developing countries — are often excluded from the development and design of digital political infrastructure, to devastating effect.
Lee’s work soon came to center on a series of questions: Who gets to design the algorithms at play in everyday life? Who sets this agenda? And how do we attract and retain technologists and students who come from diverse backgrounds and bring different experiences, values, and worldviews to this work?
“When thinking about algorithms, these codes driving commercial digital platforms are designed by engineers, mostly in the Global North, and [often] get adapted by young people and other users in the Global South,” says Lee. Algorithmic design flaws, she argues, perpetuate biases and introduce other issues that can also undermine the reach of independent media or political opposition figures. And now with the advent of AI, the impact that a narrow set of engineers working for Western tech companies can have on global systems is even greater.
Unleashing what she views as skewed AI-driven systems across developing countries ventures dangerously close to what she considers “digital colonialism,” Lee argues. “The ability to control digital political infrastructure like AI systems represents a new form of political power. But when the analysis of [AI-driven] algorithmic violence remains purely at the level of the individual, the social production of society-wide suffering — political or otherwise — is overlooked,” she explains. “Algorithmic violence can impact digital political infrastructures of entire communities, societies, and populations.”
After completing her doctorate at Harvard, Lee wanted to deepen her cross-disciplinary research experience to better understand the landscape of issues at play with AI and algorithmic violence. She also wanted to continue her work developing solutions to what she saw as the unchecked threat of algorithmic violence in countries such as Cambodia. The Ash Center’s Democracy Fellowship Program was an ideal choice for Lee, especially given center director Archon Fung’s work to bring together scholars from a variety of disciplines and institutions to examine contemporary research and trends in democracy.
“I really appreciated the unique environment that the Ash Center — and especially the seminar led by Archon Fung — provided for me as an interdisciplinary scholar who is trying to combine disparate fields of inquiry to tackle these nascent questions around AI and politics,” she says. “For me, these democratic and inclusive conversations set new standards for what rigorous cross-disciplinary scholarship can aspire to.”
During her time at the Ash Center, Lee’s engagement with scholars across the center has helped her think in new directions about how to better deploy democratic methods to ethically manage algorithmic risk and AI. These methods, Lee emphasizes, should empower the next generation with the tools to take a participatory (rather than prescriptive) approach to engaging communities that have traditionally been sidelined from debates about technology, helping to level global inequalities.
“To be able to identify and address [algorithmic violence], we need to be able to look beyond the code and think about the interactions between different components like legal systems, the structure of global economies, and the ways people use different technologies,” Lee says. “That goes back to the importance of looking at global power inequities and how those structures interact and create differential impacts on marginalized communities who are often at the frontlines of technological innovation and deployment.”