Political Economy of AI Essay Collection

Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions about power, governance, and democracy as its authors consider how technology can better serve the public interest.

This introduction was written by Allen Lab Senior Fellow Alex Pascal.

Earlier this year, the Allen Lab for Democracy Renovation at the Ash Center convened a multidisciplinary conference on the political economy of artificial intelligence (AI). The event brought together scholars and practitioners from the worlds of policy, industry, and academia to examine the contemporary paradigms for, and trajectory of, AI development. Participants focused on how the current political economy of AI privileges powerful actors over ordinary people, enables and condones monopolistic practices, and grants significant political and social influence to profit-seeking enterprises rather than incentivizing the provision of public goods through AI. Ultimately, the event sought to produce a research agenda that brings critical approaches to political economy to bear on questions of power, technology, and democracy in the context of rapid AI development.

Several participants wrote short essays building on the discussions at the conference. These wide-ranging essays lay out the problems of the existing AI development and deployment ecosystem and advance new ideas for pursuing a path for AI in the public interest, one that better serves all people and communities.


AI and Practicing Democracy

In their essay, Emily S. Lin and Marshall Ganz pose the striking question: “As the makers of AI, how will we keep AI from unmaking us?” They call on us to reckon with how humans create, exercise, and structure power, in hopes of meeting our current technological moment in a way that aligns with our values. They argue that we have lost sight of centering human agency, “a failure of our political economy that is reflected in the crisis of democracy in which we currently find ourselves.” To reclaim human agency in the development of artificial intelligence and bend AI toward the public interest, the authors suggest turning to organizing, a means of practicing democracy in which people turn the resources they have into the power they need to get the change they want. Organizing, they say, offers a people-centered lens through which to address AI’s profound social, political, economic, and moral questions. This lens shifts us away from a more technical, exclusive, and ultimately dehumanizing debate limited to data, compute, programming, and profit, and refocuses the conversation on a familiar and deeply human struggle to engage power not as an end in itself, but as a means of realizing our values.


Watching the Generative AI Hype Bubble Deflate

David Gray Widder and Mar Hicks draw on the history of tech hype cycles to warn against the deleterious effects of the current generative AI bubble. They argue that companies, investors, and the media have worked to convince the public of generative AI’s inevitability and the imperative of mass, rapid adoption — and even adaptation — to boost their profits and stock prices, and create “path dependence” in society for this new “advanced” infrastructure. Even as this self-interested hype fades, Widder and Hicks warn that its negative impacts on the environment, workers, and our information environment will persist. Generative AI is not inevitable, they assert, but “today’s hype will have lasting effects that constrain tomorrow’s possibilities” for all of us.


Medicare Advantage as Asset Management: The Pretense of Care Under Logics of Extraction

In his essay on the political economy of health care, Ajeet Singh uses the case of American health insurance to show how for-profit companies with commercial incentives are developing and deploying AI to maximize revenue from patients and the government rather than to improve the delivery of care and make people healthier. In fact, although health care is often cited as the premier sector where AI could practically improve people’s lives, the deployment of AI thus far has in many cases prevented or worsened care for patients.


Cooperative Paradigms for Artificial Intelligence

In her essay, Sarah Hubbard, noting the risks of concentrating AI development in the hands of a few monopoly powers, underscores the “need to explore alternative ownership and governance structures that would better serve the public interest.” She argues that cooperative approaches to AI development and governance could counter the dominance of large technology corporations and ease concerns among consumers, companies, and regulators. Invoking the American tradition of cooperative ownership across industries, she suggests that cooperative structures (“co-ops”) for AI at different layers of the tech stack (such as cloud and data cooperatives as well as collective governance of AI systems) can play a beneficial role in the future development and governance of AI.


AI, Digital Sovereignty, and the EU’s Path Forward: A Case for Mission-Oriented Industrial Policy

Tessel van Oirsouw, in her essay, investigates the European Union’s predicament in pursuing “digital sovereignty.” On the one hand, the EU has passed significant legislation to redress and curtail the extractive, exploitative, and harmful practices and monopoly power of big tech companies and AI behemoths. On the other hand, the EU is struggling to achieve the digital sovereignty it seeks while also pursuing other policy priorities, such as climate neutrality. Van Oirsouw argues that the EU should pursue a “mission-oriented industrial policy” centered on ambitious but specific cross-sector policy goals focused on concrete, real-world outcomes that benefit citizens. This approach, she says, would purposefully consolidate the EU’s strategic agenda, advancing digital sovereignty and stimulating technological innovation in service of the EU’s broader goals rather than treating innovation as a goal in and of itself.


Taken together, these essays are sobering. They offer cause for caution. They are clear-eyed about the world we are headed toward if AI continues to be developed and deployed in a political economy that privileges the profits of investors and shareholders over solving the problems of ordinary people: a world dominated by monopoly power and tolerant of exploitation; a world that neglects harms to individual people and communities and fails to provide public goods and meet public needs; a world that erodes the practice and culture of democracy and human agency. And yet, these essays are also hopeful. The AI era doesn’t have to be this way, and it shouldn’t be. The authors point toward opportunities and new ways of developing and governing AI, making policy, and mobilizing power to bend the trajectory of artificial intelligence toward serving the people.

More from the Political Economy of AI Conference

Feature

Conference on the Political Economy of AI

Experts gathered at the Allen Lab conference to examine the incentives and structures of AI development, as well as to discuss the past, present, and potential future of steering AI towards better serving the public interest.

Video

GETTING-Plurality Conference on the Political Economy of Artificial Intelligence

The Political Economy of AI Conference was convened by the GETTING-Plurality Research Network, a project of the Allen Lab for Democracy Renovation, housed at the Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation.

Podcast

Conference on the Political Economy of AI Podcast Episodes

Check out the podcast episodes from the Allen Lab for Democracy Renovation’s Conference on the Political Economy of AI to glean insights from each panel.