Commentary  

India & the Olympics of AI

Allen Lab Fellow Jeremy McKey reflects on India’s AI Impact Summit, exploring the theme of diffusion and the implications for sovereignty and democracy.

Photo by NazeerArt, Adobe Stock

Last month, India and Italy both hosted major international events. India’s was the “AI Impact Summit,” the fourth in a series of global convenings on AI governance that began four years ago in London; Italy’s was the Winter Olympics. Moving between venues in the sweltering traffic of New Delhi, I couldn’t help but reflect on parallels to the colder spectacle unfolding continents away. Like the Winter Games, the AI Summit seemed animated by national competition and shaped by the logic of a scorecard.

By official metrics, India scored impressively. More than 250,000 pledges for responsible AI development set a new Guinness World Record. The “New Delhi Declaration’s” 91 signatories comfortably exceeded last year’s total in Paris. The Summit’s half million attendees made it, according to the Indian government, the “world’s largest AI Summit” to date. We can reasonably debate what these numbers mean. But more revealing is what the emphasis on numbers signals. To extend the Olympic analogy: in a field where the United States and China dominate the headline events of AI, India and other middle powers are searching for arenas in which they can plausibly contend. India’s answer was clear: diffusion.

The Game of Diffusion

If the focal point of last year’s AI Summit in Paris was sovereignty, this year India effectively hosted the Diffusion Summit. Like sovereignty, diffusion is a goal governments eagerly invoke yet struggle to define. Does it mean the number of users with access to AI-enabled tools? The number of local entrepreneurs building on top of them? The share of the economy permeated by AI systems? In all its productive ambiguity, the concept surfaced everywhere: in the tagline “welfare for all, happiness of all” plastered throughout the city; in panel titles that highlighted democratization and access; in the “Charter for the Democratic Diffusion of AI” endorsed by signatories to the New Delhi Declaration; in the thousands of Indian students attending the Summit, whose presence was justly celebrated as a sort of diffusion itself.

Above all, diffusion carries a hopeful valence. In contrast to frontier model development, often framed in existential terms as a race for decisive first-mover advantage, diffusion appears positive-sum. One country’s success in deploying AI need not diminish another’s attempts. A central message of the Summit was that India, with its massive population and technology-friendly culture, offers a proving ground for AI applications that could accelerate beneficial use not only domestically but globally. Drawing on the familiar trope of India as vishwaguru — teacher to the world — Prime Minister Modi spoke of entrepreneurs who would “design and develop in India… and deliver to humanity.”

But positive-sum does not mean noncompetitive. There are strategic reasons for India to position itself as a global sandbox for AI deployment. That subtext surfaced in a revealing conversation between Nandan Nilekani and Dario Amodei, whose firms, Infosys and Anthropic, had just announced a partnership. “There is a duality,” Amodei observed, “between the fundamental capabilities of a technology and the time it takes for those capabilities to diffuse into the world.” Nilekani put it more plainly: “Diffusion of technology is a different ballgame.” Left unsaid is that the game of diffusion can be won. Being a teacher is a laudable goal; there is also power in standing at the front of a room.

India’s Competitive Advantage

When it comes to diffusion, population scale is one of India’s two distinct advantages. The second is an institutional playbook for public–private innovation, first prototyped in the context of financial inclusion, that now offers a potential roadmap for AI deployment. This playbook goes by the name of “digital public infrastructure.” It was the centerpiece of India’s G20 Leaders’ Summit in 2023, and it made a forceful reprise at the AI Summit, surfacing repeatedly in panel discussions and private conversations as the framework through which India intends to approach AI.

Poster from the MeitY pavilion demonstrating the India Energy Stack. Photo by Jeremy McKey.

One detail in particular captured this logic. In the sprawling Expo, the largest pavilion — at least by my imperfect estimate — belonged not to a private firm, but to MeitY, the ministry organizing the Summit. The exhibit showcased the government’s efforts to build capacity across the AI stack, a theme echoed in the minister’s keynote address. It is unsurprising that a host nation would reserve prime space for itself. What is revealing is that the symbolic center of innovation was a government ministry rather than a national champion company. A state-led model may accelerate diffusion, but as colleagues and I discuss in the context of payments, it also places enormous weight on how the core is governed and on whether the institutional architecture that enables innovation today might harden into tomorrow’s form of lock-in.

Of course, lock-in by domestic actors, whether government ministries or national champions, is not the worst outcome countries fear. The deeper concern, especially for middle powers, is dependence on foreign technology platforms as the foundation for an economic and social life that will inevitably be transformed by AI. Though sovereignty was not the primary theme of the Summit, as it had been in Paris, it remained a core issue in New Delhi. But the rhetoric of sovereignty masked a reality of dependence, as India’s accession to the U.S.-led Pax Silica and the prominence of American firms in major investment announcements revealed. Ironically, as Pablo Chavez has observed, even the United States now speaks fluently in the language of AI sovereignty, envisioned as something that other countries achieve through wholesale adoption of the U.S. stack.

Diffusion and Democracy

In addition to “sovereignty,” the other term that masked complex realities at the New Delhi Summit was “democracy.” Throughout the week, diffusion was frequently described in the language of “democratization,” as though rapidly expanding access to AI tools was synonymous with democratic deepening. I left the Summit puzzled by this implicit equation. Authoritarian regimes can travel along fast diffusion pathways, too. Even within democracies, rapid deployment of technologies can concentrate power, outpace deliberative processes, and undermine accountability. Diffusion is as much a test for democracies as it is an opportunity.

The New Delhi Declaration, for example, speaks of “democratizing AI resources.” Yet this objective is framed primarily at the level of nation-states, emphasizing the need to “promote access to foundational AI resources” across countries. That is a meaningful ambition, broad enough for the United States and China, Russia and Ukraine, Cuba and Iran to all endorse. But ensuring that all countries can deploy AI does not resolve the harder question of how those deployments will be democratically constrained. Expanding access across countries is not the same as strengthening democratic governance within them.

Diffusing AI without entrenching foreign dependence is one challenge. Diffusing it without eroding democratic accountability is another. For middle powers, managing that double balancing act may be the real Olympic test of AI.

 

Jeremy McKey is an Allen Lab Policy Fellow for the FY25-26 academic year.

The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates.
