Additional Resource  

AI and Practicing Democracy

As a part of the Allen Lab’s Political Economy of AI Essay Collection, Emily S Lin and Marshall Ganz call on us to reckon with how humans create, exercise, and structure power, in hopes of meeting our current technological moment in a way that aligns with our values.

“So much has been done, exclaimed the soul of Frankenstein — more, far more, will I achieve … I will pioneer a new way, explore unknown powers, and unfold to the world the deepest mysteries of creation.”

— Victor Frankenstein in Mary Shelley’s “Frankenstein”

 

“Yet you, my creator, detest and spurn me, thy creature, to whom thou art bound by ties only dissoluble by the annihilation of one of us.”

— Frankenstein’s monster in Mary Shelley’s “Frankenstein”

Encountering new technologies — whether actual or imagined — can raise existential questions for humans. While some dismiss such concerns as “moral panic,” the enduring resonance of stories ranging from Prometheus to Frankenstein to Oppenheimer reminds us that at the heart of every existential technological question is a very personal, very human moral struggle.

Today’s version of this question goes something like, “As the makers of AI, how will we keep AI from unmaking us?” At the heart of it is a reckoning with how humans create, exercise, and structure power, in hopes of meeting our current technological moment in a way that realizes, not undermines, our values.

Some responses come in the form of aspirations toward “responsible AI” or “ethical AI.” The problem with terms like these is that they locate agency in the wrong place, positioning AI as the subject — a subject that could and perhaps should be nudged in some way, but an agentic subject nonetheless. This framing reveals a persistent failure to recognize that while machines can have power, agency — the choice of how to create and direct that power — is the domain of humans.

Makers, users, and regulators of AI systems are not unaware of this blind spot. Computer scientist and tech ethicist Jaron Lanier encourages us not to engage AI as an “alien intelligence,”1 a god to be worshipped or mollified, or even a child to be developed.2 Instead, he reminds us that AI is a tool that humans are entirely capable of, and indeed responsible for, shaping and using. As early as 2016, Microsoft CEO Satya Nadella argued against debating whether AI was itself good or evil, writing, “The debate should be about the values instilled in the people and institutions creating this technology.”3

In this essay, we extend the field of inquiry beyond the values “instilled” in people and institutions to explore how such values are collectively cultivated, negotiated, and made real through our choices and actions — what we call practicing democracy. We suggest that, at both micro and macro levels, we have forgotten how to center human agency, a failure of our political economy that mistakes means for ends, and that is reflected in today’s crisis of democracy.

Our fundamental question is: “How can we, as people, use our resources to create the capacity to realize not just our goals and objectives but our values?” We argue that if we want to engage AI as a means toward human ends, we need to understand how people build, shape, and exercise power to create change aligned with their values. In other words, we need to understand organizing.

Organizing: People, Power, Change

Organizing is how we equip people with the capacity to work with others to build the power they need to get the change they seek.4 As organizers, we always start by asking, “Who are my people?” Second, we ask, “What change do my people want?” And third, we ask, “How can we work together to turn the resources we have into the power we need to win that change?”

It is this third question that enables people not just to be subjected to change but to be agents in creating it. The 12th-century philosopher Maimonides said that hope is belief in the plausibility of the possible, as opposed to the necessity of the probable. In a world in which AI and its algorithms nudge us toward using the probable to infer the necessary, organizing reorients us to the possibility that lives within all people, offering plausible examples of how such possibility can be turned into the power we need to get the change we want.

Photo by Unseen Histories, Unsplash

In this way, organizing is a framework for how humans can engage with power as a natural and essential phenomenon that is deeply tied to the relationship between human agency and human values. Martin Luther King Jr. famously stated, “Power without love is reckless and abusive, and love without power is sentimental and anemic. Power at its best is love implementing the demands of justice, and justice at its best is [power] correcting everything that stands against love.”5

So power is not just good or bad, nor is it an isolated “thing” that you either have or don’t. We frame power as an influence created through the terms of our interdependence with others, based on a balance of needs and resources negotiated in the context of human value and values.

Organizers value “people power” — influence that is grown and exercised through people’s commitment and shared practice in real relationship with each other. We teach organizing through five key practices that are rooted in basic human competencies: building relationships, storytelling, strategizing, acting, and structuring. And we do all this through a cascading model of leadership that we call a “pedagogy of practice,” in which we enable learners to become leaders, who in turn teach other learners to lead.6 Through this, people grow their power by continuously honing and sharing their craft.

The power to govern is not always rooted in people. For example, concentrations of money or access to perceived divinity can also be sources of governing power. These concentrations often stem from the belief that freedom and equality are in tension — if I want to maximize my freedom to meet my needs, I need to concentrate resources for myself.

In contrast, philosopher Elizabeth Anderson argues that, in an interdependent human society, freedom and equality are themselves interdependent. Only in an equal society, she contends, can people truly be free. The promise of democracy is that by rooting power in the people, the “demos,” we can not only exercise but hold accountable governing power, enabling us to live together in dignity, equality, solidarity, security, justice, and hope, respecting the equal, infinite, and inherent worth of each person — what Danielle Allen calls “human flourishing.”7


But who are “the people”? And who decides? This is where organizing comes in. As Alexis de Tocqueville observed in the nineteenth century, “knowledge of how to combine” is the “mother of all forms of knowledge.”8 It is the opportunity, experience, and practice of learning to come together to discern common interests, committing to one another in pursuit of a shared purpose rooted in collective values, that enables us to identify and define who “our people” are. We see organizing as the key to practicing democracy because it transcends narrow self-interest — not by aggregating individual preferences but through shared practices that build trust and enable collective action. These shared practices enable people to exercise their agency through the discernment, structuring, and exercising of power that roots the balance of power not in governing authorities, but in the people who authorize them to govern.9

At its core, then, organizing is an intentional engagement with the inherent interdependence of life, enabling individual and collective human agency to build and exercise power to “[correct] everything that stands against love.”

What Does This Have to Do with AI?

Economist Daron Acemoglu has convincingly argued that while AI technologies could be used in a way that enables democratic practice, they are currently being used in ways that exacerbate existing concentrations of power, for example, by deepening economic inequality. Observing that current use is primarily “driven by cost savings/productivity improvements at the task level,”10 Acemoglu highlights how we are repeating this familiar human error of confusing technological means with human ends.

To re-center human agency and focus our perspective on AI through a lens of practicing democracy, we suggest three main entry points: downstream, upstream, and “dam” approaches.

Downstream Approach

A “downstream” approach focuses on how to use AI in a way that enhances democratic practice. For example, we have experimented with leveraging AI-assisted conversation analysis to coach leadership team meetings. Metrics of conversational share and turn-taking can reveal, for example, that the supposed team “coordinator” is doing 80% of the talking, dominating more than leading. While conversation analysis is not unique to our context, the “heart challenge” of learning to intentionally engage tension and discomfort around power dynamics is a key part of learning to organize. We have found that, with the right pedagogical approach, data can productively scaffold engagement with such challenges, for example, enabling the coach to shift focus from the coordinator alone to engaging the team in collective reflection on how their interactions align with their agreed-upon norms. This is an example of what we call “evidence-based organizing,” a kind of AI-boosted “game tape” process, similar to how athletes use AI-assisted recording analysis to improve their performance. In the downstream approach, the key is to identify where AI can enhance existing craft, not replace it.
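To make the conversational-share and turn-taking metrics concrete, here is a minimal sketch of how such measures could be computed from a labeled meeting transcript. This is our own illustrative example, not the authors’ actual tooling; the speaker labels, sample transcript, and dominance threshold are all hypothetical.

```python
# Illustrative sketch of conversational-share and turn-taking metrics
# from a labeled transcript (a list of (speaker, utterance) pairs,
# e.g. the output of any speaker-diarized speech-to-text pipeline).
from collections import Counter


def conversational_share(turns):
    """Fraction of total words spoken by each speaker."""
    words = Counter()
    for speaker, utterance in turns:
        words[speaker] += len(utterance.split())
    total = sum(words.values())
    return {speaker: count / total for speaker, count in words.items()}


def turn_counts(turns):
    """Number of turns taken by each speaker."""
    return Counter(speaker for speaker, _ in turns)


# Hypothetical meeting transcript.
meeting = [
    ("coordinator", "Let's review the agenda and the canvassing numbers for this week"),
    ("member_a", "I have a question"),
    ("coordinator", "Sure go ahead but first let me walk through all three items myself"),
    ("member_b", "Okay"),
]

share = conversational_share(meeting)
# Flag anyone speaking more than an (arbitrary, illustrative) 60% of the words.
dominant = [speaker for speaker, fraction in share.items() if fraction > 0.6]
```

In a coaching setting, numbers like these would not be a verdict but a prompt: a starting point for the team’s own reflection on whether its conversational patterns match its agreed-upon norms.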

Upstream Approach

An “upstream” approach, in contrast, is primarily concerned with how to build AI. Training AI models to differentiate between “correct” and “incorrect” guesses is a crucial form of human influence, but how can we do this if we haven’t defined “correct” and “incorrect” from a democratic practice perspective? In research collaborations with the MIT Center for Constructive Communication and Elizabeth McKenna at the Civic Power Lab, we are exploring how to define the right metrics that differentiate between mobilizing, organizing, and governing power, and between values-based relationship-building and issues-based relational transactions.

Additionally, upstream approaches help us evaluate the potential limitations of AI models. Our research also explores whether large language models (LLMs) can be “taught” to recognize effective public narratives — a deeply emotional and relational leadership practice. The goal is not to replace human storytellers with machines but to recognize that algorithms have shaped our attention and behavior for years. This work asks how we might design algorithms that can recognize and prioritize values-based communication over marketing-oriented messaging, or whether LLMs might be missing something essential about human connection altogether.

Dam Approach

Photo by the Ash Center, Political Economy of AI Conference

Finally, the “dam” approach is primarily concerned with figuring out the structures needed to regulate AI. Acemoglu suggests regulation as a means to mitigate how AI is exacerbating unequal concentrations of power. While few people are yet experts on AI regulation, that does not mean we should cede the ground of regulation. In a recent Ash Center convening on AI and political economy, Amba Kak of AI Now suggested that the key to regulating AI right now is not to define the perfect guardrails, but to “build a movement.”11 Movements do not legislate, but they do influence legislation and have driven every major societal shift toward justice.

Yet, it is a mistake to understand the value of movements simply as a means of passing policy. People organize not just to move some coherent agenda forward. It is through organizing that we generate the democratic agenda, and in fact articulate, negotiate, and create the demos itself. To prioritize democratic deliberation and practice, then, is to prioritize the creation of venues in which people can meaningfully experience what it means not only to consume, not only to participate, but to govern — engaging in the process of discerning the common interest and exercising collective ownership and accountability. Recent successful union efforts, like the Writers Guild of America’s (WGA) contract negotiations, are useful not just for protecting human labor from exploitation by the owners of AI technologies but also for giving union members experience in exercising their collective democratic muscles to build and wield collective power.

In the same convening mentioned above, Julius Krein critiqued claims of AI’s transformative nature as overblown, given that the financial benefits are still concentrated in the same few large companies that profit from ownership of intellectual property and associated rent-seeking practices.12 Elizabeth Anderson’s book Private Government illustrates how these dynamics have negative consequences not only at the macroeconomic level, but in the daily experiences of human beings who now find ourselves living most of our lives owning nothing and deciding nothing, ruled by privately owned, authoritarian bureaucracies: our employers.13 Ultimately, then, enhancing opportunities for democratic practice in AI regulation, as in the WGA example, might be the entry point at which organizing could exert the most powerful influence toward promoting human agency and human ends. By creating new venues through which we can re-learn how to practice democracy, organizing enables us to influence not only policies but also the processes by which those policies are created, implemented, evaluated, learned from, and then re-imagined.


Conclusion

In The Great Learning, Confucius teaches: “Things have their root and their branches. Affairs have their end and their beginning. To know what is first and what is last will lead near to what is taught in the Great Learning.” In his pursuit of knowledge, Victor Frankenstein makes his way into the heart of wonder and excitement of “the investigation of things” in “unfold[ing]… the deepest mysteries of creation,” but the trap he falls into is that he never finds his way back out. He mistakes the branch for the root, the first for the last, means for ends. Annihilation results.

In this essay, we have suggested that such annihilation, at the societal, personal, and moral levels, is neither inevitable nor impossible. Organizing offers a framework for practicing democracy that grapples with the potentially existential question of AI through the very familiar human struggle to reconcile our relationship to power with our values. This struggle calls us to remember that power “at its best” is not an end itself, but a means of realizing our values. And when we look for pointers toward the paths rooted in our deepest moral values, it is usually emotion that lets us know when we’ve found them. Love is certainly one of those emotions. Another is hope.

In this moment of exponential technological advancement, we appear to face an existential reckoning with power. But this reckoning can be a source not of fear but of learning; not of dehumanization but of rehumanization; not of despair but of hope. Realizing the promise of democracy requires that a society commit to governance based on the equal value of each of our voices; discernment of our common interests through association, deliberation, and learning; and respect for the exercise of individual and collective agency. Our hope is rooted not just in the belief that this is true, and not even in the belief that it is possible, but in our lived experience of practicing and teaching it with others every day. We invite you to join us.

Learn more about the Practicing Democracy Project

The Practicing Democracy Project (PDP), led by Faculty Director Professor Marshall Ganz and housed at the Center for Public Leadership, enables people to work together to develop the leadership; build the community; and create the power to fulfill the democratic promise of equal, inclusive, and collective agency.


Political Economy of AI Essay Collection

Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions surrounding power, governance, and democracy as they consider how technology can better serve the public interest.


Citations
  1. Read, F. (2023, May 8). Jaron Lanier: How humanity can defeat AI: The techno-philosopher on the power of faith. UnHerd. https://unherd.com/2023/05/how-humanity-can-defeat-ai/
  2. Lanier, J. (2023, December 1). Harvard STS Conference on AI and Democracy, Cambridge, MA, United States.
  3. Nadella, S. (2016, June 28). The partnership of the future. Slate. https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html
  4. Ganz, M. (2024). People, power, change: Organizing for democratic renewal. Oxford University Press.
  5. King, M.L. (1967, August 16). Where do we go from here? Address to the Tenth Annual Session of the Southern Christian Leadership Conference. Atlanta, GA, United States.
  6. Ganz, M. & E.S. Lin (2011). Learning to lead: A pedagogy of practice. In S. Snook, N. Nohria, & R. Khurana (Eds.), The handbook for teaching leadership (pp. 353-366). Thousand Oaks, CA: SAGE Publications.
  7. Allen, D. (2023). Justice by means of democracy. University of Chicago Press.
  8. Tocqueville, A. de. (2000). Democracy in America. University of Chicago Press. (Original work published 1835–1840)
  9. Ganz, ibid.
  10. Acemoglu, D. (2024). The simple macroeconomics of AI (NBER Working Paper #32487). National Bureau of Economic Research. https://www.nber.org/system/files/working_papers/w32487/w32487.pdf
  11. Kak, A. (2024, May 31). Panel: Unpacking motivation behind AI Innovation. Ash Center Conference on the Political Economy of AI. Cambridge, MA, USA.
  12. Krein, J. (2024, May 31). Panel: Unpacking motivation behind AI Innovation. Ash Center Conference on the Political Economy of AI. Cambridge, MA, USA.
  13. Anderson, E. (2017). Private government: How employers rule our lives (and why we don’t talk about it). Princeton University Press.
