Allen Lab Fellow Jeremy McKey reflects on India’s AI Impact Summit, exploring the theme of diffusion and the implications for sovereignty and democracy.
Last month, India and Italy both hosted major international events. India’s was the “AI Impact Summit,” the fourth in a series of global convenings on AI governance that began four years ago in London; Italy’s was the Winter Olympics. Moving between venues in the sweltering traffic of New Delhi, I couldn’t help but reflect on parallels to the colder spectacle unfolding continents away. Like the Winter Games, the AI Summit seemed animated by national competition and shaped by the logic of a scorecard.
By official metrics, India scored impressively. More than 250,000 pledges for responsible AI development set a new Guinness World Record. The "New Delhi Declaration" drew 91 signatories, comfortably exceeding last year's total in Paris. And the Summit's half million attendees made it, according to the Indian government, the "world's largest AI Summit" to date. We can reasonably debate what these numbers mean. But more revealing is what the emphasis on numbers signals. To extend the Olympic analogy: in a field where the United States and China dominate the headline events of AI, India and other middle powers are searching for arenas in which they can plausibly contend. India's answer was clear: diffusion.
The Game of Diffusion
If the focal point of last year’s AI Summit in Paris was sovereignty, this year India effectively hosted the Diffusion Summit. Like sovereignty, diffusion is a goal governments eagerly invoke yet struggle to define. Does it mean the number of users with access to AI-enabled tools? The number of local entrepreneurs building on top of them? The share of the economy permeated by AI systems? In all its productive ambiguity, the concept surfaced everywhere: in the tagline “welfare for all, happiness of all” plastered throughout the city; in panel titles that highlighted democratization and access; in the “Charter for the Democratic Diffusion of AI” endorsed by signatories to the New Delhi Declaration; in the thousands of Indian students attending the Summit, whose presence was justly celebrated as a sort of diffusion itself.
Above all, diffusion carries a hopeful valence. In contrast to frontier model development, often framed in existential terms as a race for decisive first-mover advantage, diffusion appears positive-sum. One country’s success in deploying AI need not diminish another’s attempts. A central message of the Summit was that India, with its massive population and technology-friendly culture, offers a proving ground for AI applications that could accelerate beneficial use not only domestically but globally. Drawing on the familiar trope of India as vishwaguru — teacher to the world — Prime Minister Modi spoke of entrepreneurs who would “design and develop in India… and deliver to humanity.”
But positive-sum does not mean noncompetitive. There are strategic reasons for India to position itself as a global sandbox for AI deployment. That subtext surfaced in a revealing conversation between Nandan Nilekani and Dario Amodei, whose firms, Infosys and Anthropic, had just announced a partnership. “There is a duality,” Amodei observed, “between the fundamental capabilities of a technology and the time it takes for those capabilities to diffuse into the world.” Nilekani put it more plainly: “Diffusion of technology is a different ballgame.” Left unsaid is that the game of diffusion can be won. Being a teacher is a laudable goal; there is also power in standing at the front of a room.
India’s Competitive Advantage
When it comes to diffusion, population scale is one of India’s two distinct advantages. The second is an institutional playbook for public–private innovation, first prototyped in the context of financial inclusion, that now offers a potential roadmap for AI deployment. This playbook goes by the name of “digital public infrastructure.” It was the centerpiece of India’s G20 Leaders’ Summit in 2023, and it made a forceful reprise at the AI Summit, surfacing repeatedly in panel discussions and private conversations as the framework through which India intends to approach AI.
Poster from the MeitY pavilion demonstrating the India Energy Stack. Photo by Jeremy McKey.
One detail in particular captured this logic. In the sprawling Expo, the largest pavilion — at least by my imperfect estimate — belonged not to a private firm, but to MeitY, the ministry organizing the Summit. The exhibit showcased the government’s efforts to build capacity across the AI stack, a theme echoed in the minister’s keynote address. It is unsurprising that a host nation would reserve prime space for itself. What is revealing is that the symbolic center of innovation was a government ministry rather than a national champion company. A state-led model may accelerate diffusion, but as colleagues and I discuss in the context of payments, it also places enormous weight on how the core is governed and on whether the institutional architecture that enables innovation today might harden into tomorrow’s form of lock-in.
Of course, lock-in by domestic actors, whether government ministries or national champions, is not the worst outcome countries fear. The deeper concern, especially for middle powers, is dependence on foreign technology platforms as the foundation for economic and social life that will inevitably be transformed by AI. Though not the primary theme of the Summit, as it had been in Paris, sovereignty was still a core issue in New Delhi. But the rhetoric of sovereignty masked a reality of dependence, as India’s accession to the U.S.-led Pax Silica and the prominence of American firms in major investment announcements revealed. Ironically, as Pablo Chavez has observed, even the United States now speaks fluently in the language of AI sovereignty—envisioned as something that other countries achieve through wholesale adoption of the U.S. stack.
Diffusion and Democracy
In addition to "sovereignty," the other term that masked complex realities at the New Delhi Summit was "democracy." Throughout the week, diffusion was frequently described in the language of "democratization," as though rapidly expanding access to AI tools were synonymous with democratic deepening. I left the Summit puzzled by this implicit equation. Authoritarian regimes can travel along fast diffusion pathways, too. Even within democracies, rapid deployment of technologies can concentrate power, outpace deliberative processes, and undermine accountability. Diffusion is as much a test for democracies as it is an opportunity.
The New Delhi Declaration, for example, speaks of "democratizing AI resources." Yet this objective is framed primarily at the level of nation-states, emphasizing the need to "promote access to foundational AI resources" across countries. That is a meaningful ambition, broad enough for the United States and China, Russia and Ukraine, Cuba and Iran all to endorse. But ensuring that all countries can deploy AI does not resolve the harder question of how those deployments will be democratically constrained. Expanding access across countries is not the same as strengthening democratic governance within them.
Diffusing AI without entrenching foreign dependence is one challenge. Diffusing it without eroding democratic accountability is another. For middle powers, managing that double balancing act may be the real Olympic test of AI.
Jeremy McKey is an Allen Lab Policy Fellow for the FY25-26 academic year.
The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates.