
Watching the Generative AI Hype Bubble Deflate

As a part of the Allen Lab’s Political Economy of AI Essay Collection, David Gray Widder and Mar Hicks draw on the history of tech hype cycles to warn of the harmful effects of the current generative AI bubble.



Only a few short months ago, generative AI was sold to us as inevitable by AI company leaders, their partners, and venture capitalists. Certain media outlets promoted these claims, fueling online discourse about what each new beta release could accomplish with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some even added “AI” to their names to juice their stock prices,1 and companies that mentioned “AI” in their earnings calls saw similar increases.2

Investors and consultants urged businesses not to get left behind. Morgan Stanley positioned AI as key to a $6 trillion opportunity.3 McKinsey hailed generative AI as “the next productivity frontier” and estimated gains of $2.6 trillion to $4.4 trillion,4 comparable to the annual GDP of the United Kingdom or the value of all the world’s agricultural production.5 6 Conveniently, McKinsey also offers consulting services to help businesses “create unimagined opportunities in a constantly changing world.”7 Readers of this piece can likely recall being exhorted by news media or their own industry leaders to “learn AI” while encountering targeted ads hawking AI “boot camps.”

While some have long been wise to the hype,8 9 10 11 global financial institutions and venture capitalists are now beginning to ask if generative AI is overhyped.12 In this essay, we argue that even as the generative AI hype bubble slowly deflates, its harmful effects will last: carbon can’t be put back in the ground, workers continue to face AI’s disciplining pressures, and the poisonous effect on our information commons will be hard to undo.

Historical Hype Cycles in the Digital Economy


Attempts to present AI as desirable, inevitable, and as a more stable concept than it actually is follow well-worn historical patterns.13 A key strategy for a technology to gain market share and buy-in is to present it as an inevitable and necessary part of future infrastructure, encouraging the development of new, anticipatory infrastructures around it. From the early history of automobiles and railroads to the rise of electricity and computers, this dynamic has played a significant role. All these technologies required major infrastructure investments — roads, tracks, electrical grids, and workflow changes — to become functional and dominant. None were inevitable, though they may appear so in retrospect.14 15 16 17

The well-known phrase “nobody ever got fired for buying IBM” is a good, if partial, historical analogue to the current feeding frenzy around AI. IBM, while expensive, was a recognized leader in automating workplaces, ostensibly to the advantage of the corporations that bought its systems. IBM famously re-engineered the environments where its systems were installed, ensuring that office infrastructures and workflows were optimally reconfigured to fit its computers, rather than the other way around. Similarly, AI corporations have repeatedly claimed that we are in a new age of not just adoption but of proactive adaptation to their technology. Ironically, in AI waves past, IBM itself over-promised and under-delivered; some described its “Watson AI” product as a “mismatch” for the health care context it was sold for, while others described it as “dangerous.”18 Time and again, AI has been crowned as an inevitable “advance” despite its many problems and shortcomings: built-in biases, inaccurate results, privacy and intellectual property violations, and voracious energy use.

Nevertheless, in the media and — early on at least — among investors and corporations seeking to profit, AI has been publicly presented as unstoppable.19 20 21 This key form of rhetoric came from those eager to pave the way for a new set of heavily funded technologies; it was never a statement of fact about the technology’s robustness, utility, or even its likely future utility. Rather, it reflects a standard stage in the development of many technologies, in which a technology’s manufacturers, boosters, and investors attempt to make it indispensable by integrating it, often prematurely, into existing infrastructures and workflows, counting on this entanglement to “save a spot” for the technology to be more fully integrated in the future. The more far-reaching this early integration, the more difficult it becomes to disentangle or roll back the attendant changes, meaning that even broken or substandard technologies stand a better chance of becoming entrenched.22

In the case of AI, however, as with many other recent technology booms or boomlets (from blockchain to the metaverse to clunky VR goggles23 24), this stage was also accompanied by severe criticism of both the rhetorical positioning of the technology as indispensable and of the technology’s current and potential states. Historically, this form of critique is an important stage of technological development, offering consumers, users, and potential users a chance to alter or improve upon the technology by challenging designers’ assumptions before the “black box” of the technology is closed.25 It also offers a small and sometimes unlikely — but not impossible — window for partial or full rejection of the technology.


Deflating the Generative AI Bubble

While talk of a bubble has simmered beneath the surface even as the money faucet continues to flow,26 we observe a recent inflection point. Interlocutors are beginning to sound the alarm that AI is overvalued. The perception that AI is a bubble, rather than a gold rush, is making its way into wider discourse with increasing frequency and strength. The more industry bosses protest that it’s not a bubble,27 the more people have begun to look twice.

For instance, users and artists slammed Adobe for ambiguous statements about using customers’ creative work to train generative AI, forcing the company to later clarify that it would only do so in specific circumstances. At the same time, the explicit promise of not using customer data for AI training has started to become a selling point for others, with one rival positioning its product as “not a trick to access your media for AI training.”28 Another company boasted a “100% LLM [large-language model]-Free” product, spotlighting that it “never present[s] chatbot[s] that act human or imitate human experts.”29 Even major players like Amazon and Google have attempted to lower business expectations for generative AI, acknowledging its expense, accuracy issues, and as yet uncertain value proposition.30 Nonetheless, they have done so in ways that attempt to preserve the hype surrounding AI, which will likely remain profitable for their cloud businesses.

It’s not just technology companies questioning something they initially framed as inevitable. Recently, venture capital firm Sequoia Capital said that “the AI bubble is reaching a tipping point”31 after failing to find a satisfactory answer to a question it posed last year: “Where is all the revenue?”32 Similarly, in Goldman Sachs’ recent report, “Gen AI: too much spend, too little benefit?”,33 its global head of equity research stated, “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.” Still, the report tellingly notes that even if AI doesn’t “deliver on its promise,” it may still generate investor returns, as “bubbles take a long time to burst.” In short, financial experts are pointing out that capital expenditures on things like graphics cards or cloud compute have not been met by commensurate revenue, nor does there seem to be a clear pathway to remedy this. This shift is a recognizable stage in which a product and its promoters do not suffer swift devaluation but begin to lose their top spots on the NASDAQ and other major exchanges.

Why is this happening? Technically, large-language models (LLMs) continue to produce erroneous but confident text (“hallucinations”) because they are inherently probabilistic machines, and no clear fixes exist because this is a fundamental feature of how the technology works.34 In many cases, LLMs fail to automate the labor that CEOs confidently claimed they could, and instead often decrease employee productivity.35 Economically, interest rates have risen, so “easy money” is no longer available to fund boosters’ loftiest and horrifically expensive generative AI dreams.36 Meanwhile, federal regulators have intensified their scrutiny, even as they struggle to rein in social media platforms. FTC chair Lina Khan has said, “There is no AI exemption to the laws on the books,” encouraging regulators to apply standard regulatory tools to AI.37 Legally, after misappropriating or allegedly stealing much of their training data during early generative AI development, companies now face lawsuits and must pay for their inputs.38 Public discourse is catching up too. We were promised that AI would automate tedious tasks, freeing people for more fulfilling work. Increasingly, users recognize that these technologies are built to “do my art and writing so that I can do my laundry and dishes,” in the words of one user, rather than the reverse.39


Hype’s Harmful Effects Are Not Easily Reversed


While critics of any technology bubble may feel vindicated by seeing it pop — and by seeing stock markets and the broader world catch up with their gimlet-eyed early critiques — those who have been questioning the AI hype also know that the deflation, or even popping, of the bubble does not undo the harm already caused. Hype has material and often harmful effects in the real world. However ephemeral these technologies may seem, they are grounded in real-world resources, bodies, and lives, reminiscent of the destructive industrial practices of past ages. Decades of regulation were required to roll back the environmental and public health harms of technologies we no longer use, from short-lived ones like radium to longer-lived ones like leaded gasoline.40 41 Even ephemeral phenomena can have long-lasting negative effects.

The hype around AI has already impacted climate goals. In the United States, plans to retire polluting coal power plants have slowed by 40%, with politicians and industry lobbyists citing the need to win the “AI war.”42 Microsoft, which had planned to be carbon negative by 2030,43 walked back that goal after its 2023 emissions came in 30% higher than its 2020 emissions.44 Brad Smith, its president, said that this “moonshot” goal was made before the “explosion in artificial intelligence,” and that now “the moon is five times as far away,” with AI as the driving factor. After firing employees for raising concerns about generative AI’s environmental costs,45 46 Google has also seen its emissions increase; it no longer claims to be carbon-neutral and has pushed its net-zero emissions goal date further into the future.47 This carbon can’t be unburned, and the breathless discourse surrounding AI has helped ratchet up the existing climate emergency, providing justification for companies to renege on their already-imperiled environmental promises.48

The discourse surrounding AI will also have lasting effects on labor. Some workers will see the scope of their work reduced, while others will face wage stagnation or cuts owing to the threat, however empty, that they might be replaced with poor facsimiles of themselves. Creative industries are especially at risk: as illustrator Molly Crabapple states, while demand for high-end human-created art may remain, generative AI will harm many working illustrators, as editors opt for generative AI’s fast and low-cost illustrations over original creative output.49 Even as artists mobilize with technical and regulatory countermeasures,50 this burden distracts from their artistic pursuits. Unions such as SAG-AFTRA have won meager protections against AI,51 and while this hot-button issue perhaps raised the profile of their strike, it also distracted from other crucial contract negotiations. Even if generative AI doesn’t live up to the hype, its effect on how we value creative work may be hard to shake, leaving creative workers to reclaim every inch lost during the AI boom.


Lastly, generative AI will have long-term effects on our information commons. The ingestion of massive amounts of user-generated data, text, and artwork — often in ways that appear to violate copyright and fair use — has pushed us closer to the enclosure of the information commons by corporations.52 Google’s AI search snippets tool, for example, authoritatively suggested putting glue on pizza and recommended eating at least one small rock per day.53 While these errors may seem obvious enough to be harmless, most AI-generated misinformation is not so easy to detect. The increasing prevalence of AI-generated nonsense on the internet will make it harder to find trusted information, allow misinformation to propagate, and erode trust in the sources we used to count on for reliable information.

A key question remains, and we may never have a satisfactory answer: what if the hype was always meant to fail? What if the point was to hype things up, get in, make a profit, and entrench infrastructure dependencies before critique, or reality, had a chance to catch up?54 Path dependency is well understood by historians of technology and by those seeking to profit from AI. Today’s hype will have lasting effects that constrain tomorrow’s possibilities. Using the AI hype to shift more of our infrastructure to the cloud deepens our dependence on cloud companies, a dependence that will be hard to undo even as inflated promises for AI are dashed.

Inventors, technologists, corporations, boosters, and investors regularly seek to create inevitability, in part by encouraging a discourse of inexorable technological “progress” tied to their latest investment vehicle. By referencing past technologies that now seem natural and necessary, they claim, tautologically, that their current developments are just as inevitable. Yet the efforts to make AI indispensable on a large scale, culturally, technologically, and economically, have not lived up to their promises. In a sense, this is not surprising, as generative AI does not so much represent the wave of the future as it does the ebb and flow of waves past.

Acknowledgements

We are grateful to Ali Alkhatib, Sireesh Gururaja, and Alex Hanna for their insightful comments on earlier drafts.


Citations
  1. Benzinga. “Stocks With ‘AI’ In the Name Are Soaring: Could It Be The Next Crypto-, Cannabis-Style Stock Naming Craze?” Markets Insider, January 31, 2023. https://markets.businessinsider.com/news/stocks/stocks-with-ai-in-the-name-are-soaring-could-it-be-the-next-crypto-cannabis-stock-naming-craze-1032055463.
  2. Wiltermuth, Joy. “AI Talk Is Surging during Company Earnings Calls — and so Are Those Companies’ Shares.” Market Watch, March 16, 2024. https://www.marketwatch.com/story/ai-talk-is-surging-during-company-earnings-calls-and-so-are-those-companies-shares-f924d91a.
  3. Morgan Stanley. “The $6 Trillion Opportunity in AI.” April 18, 2023. https://www.morganstanley.com/ideas/generative-ai-growth-opportunity.
  4. Chui, Michael, Roger Roberts, Lareina Yee, et al. “The Economic Potential of Generative AI: The Next Productivity Frontier.” McKinsey & Company, June 14, 2023. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights.
  5. Statista, May 2024. https://www.statista.com/outlook/io/agriculture/worldwide.
  6. World Bank. “United Kingdom.” World Bank Open Data, 2023. https://data.worldbank.org.
  7. McKinsey & Company. “Quantum Black – AI by McKinsey,” 2024. http://ceros.mckinsey.com/qb-overview-desktop-2-1.
  8. Bender, Emily M., and Alex Hanna. “Mystery AI Hype Theater 3000.” The Distributed AI Research Institute, 2024. https://www.dair-institute.org/maiht3k/.
  9. Marcus, Gary. “The Great AI Retrenchment Has Begun.” Marcus on AI, June 15, 2024. https://garymarcus.substack.com/p/the-great-ai-retrenchment-has-begun.
  10. Marx, Paris. “The ChatGPT Revolution Is Another Tech Fantasy.” Disconnect, July 27, 2023. https://disconnect.blog/the-chatgpt-revolution-is-another/.
  11. Hanna, Alex. “The Grimy Residue of the AI Bubble.” Mystery AI Hype Theater 3000: The Newsletter, July 25, 2024. https://buttondown.email/maiht3k/archive/the-grimy-residue-of-the-ai-bubble/.
  12. Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind. Goldman Sachs Global Macro Research, June 25, 2024. https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf.
  13. Suchman, Lucy. “The Uncontroversial ‘Thingness’ of AI.” Big Data & Society 10, no. 2 (July 2023): 20539517231206794. https://doi.org/10.1177/20539517231206794.
  14. Oldenziel, Ruth, M. Luísa Sousa, and Pieter van Wesemael. “Designing (Un)Sustainable Urban Mobility from Transnational Settings, 1850-Present.” In A U-Turn to the Future: Sustainable Urban Mobility since 1850, edited by Martin Emanuel, Frank Schipper, and Ruth Oldenziel. Berghahn Books, 2020.
  15. Nye, David E. Electrifying America: Social Meanings of a New Technology, 1880-1940. MIT Press, 2001.
  16. Hicks, Mar. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. History of Computing. The MIT Press, 2018.
  17. Burrell, Jenna. “Artificial Intelligence and the Ever-Receding Horizon of the Future.” Tech Policy Press, June 6, 2023. https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future.
  18. Strickland, Eliza. “How IBM Watson Overpromised and Underdelivered on AI Health Care.” IEEE Spectrum, April 2, 2019. https://spectrum.ieee.org/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care.
  19. Taylor, Josh. “Rise of Artificial Intelligence Is Inevitable but Should Not Be Feared, ‘Father of AI’ Says.” The Guardian, May 7, 2023. https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says.
  20. Shapiro, Daniel. “Artificial Intelligence: It’s Complicated And Unsettling, But Inevitable.” Forbes, September 10, 2019. https://www.forbes.com/sites/danielshapiro1/2019/09/10/artificial-intelligence-its-complicated-and-unsettling-but-inevitable/.
  21. Raasch, Jon Michael. “In Education, ‘AI Is Inevitable,’ and Students Who Don’t Use It Will ‘Be at a Disadvantage’: AI Founder.” FOX Business, June 22, 2023. https://www.foxbusiness.com/technology/education-ai-inevitable-students-use-disadvantage-ai-founder.
  22. Halcyon Lawrence explores this dynamic with speech recognition technologies that were unable to recognize the accents of the majority of global English speakers for much of their existence.
    Lawrence, Halcyon M. “Siri Disciplines.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. The MIT Press, 2021.
  23. Axon, Samuel. “RIP (Again): Google Glass Will No Longer Be Sold.” Ars Technica, March 16, 2023. https://arstechnica.com/gadgets/2023/03/google-glass-is-about-to-be-discontinued-again/.
  24. Barr, Kyle. “Apple Vision Pro U.S. Sales Are All But Dead, Market Analysts Say.” Gizmodo, July 11, 2024. https://gizmodo.com/apple-vision-pro-u-s-sales-2000469302.
  25. Kline, Ronald, and Trevor Pinch. “Users as Agents of Technological Change: The Social Construction of the Automobile in the Rural United States.” Technology and Culture 37, no. 4 (1996): 763–95. https://doi.org/10.2307/3107097.
  26. Celarier, Michelle. “Money Is Pouring Into AI. Skeptics Say It’s a ‘Grift Shift.’” Institutional Investor, August 29, 2023. https://www.institutionalinvestor.com/article/2c4fad0w6irk838pca3gg/portfolio/money-is-pouring-into-ai-skeptics-say-its-a-grift-shift.
  27. Bratton, Laura, and Britney Nguyen. “The AI Craze Is No Dot-Com Bubble. Here’s Why.” Quartz, April 15, 2024. https://qz.com/ai-stocks-dot-com-bubble-nvidia-google-microsoft-amazon-1851407019.
  28. Gray, Jeremy. “Blackmagic Taunts Adobe Following Terms of Use Controversy.” PetaPixel, June 28, 2024. https://petapixel.com/2024/06/28/blackmagic-taunts-adobe-following-terms-of-use-controversy/.
  29. Inqwire. “Inqwire.” Accessed July 29, 2024. https://www.inqwire.io/.
  30. Gardizy, Anissa, and Aaron Holmes. “Amazon, Google Quietly Tamp Down Generative AI Expectations.” The Information, March 12, 2024.
  31. Cahn, David. “AI’s $600B Question.” Sequoia Capital, June 20, 2024. https://www.sequoiacap.com/article/ais-600b-question/.
  32. Cahn, David. “AI’s $200B Question.” Sequoia Capital, September 20, 2023. https://www.sequoiacap.com/article/follow-the-gpus-perspective/.
  33. Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind. Goldman Sachs Global Macro Research, June 25, 2024. https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf.
  34. Leffer, Lauren. “Hallucinations Are Baked into AI Chatbots.” Scientific American, April 5, 2024. https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/.
  35. Robinson, Bryan. “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds.” Forbes, July 23, 2024. https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/.
  36. Karma, Rogé. “The Era of Easy Money Is Over. That’s a Good Thing.” The Atlantic, December 11, 2023. https://www.theatlantic.com/ideas/archive/2023/12/higher-interest-rates-fed-economy/676282/.
  37. Khan, Lina. “Statement of Chair Lina M. Khan Regarding the Joint Interagency Statement on AI.” Federal Trade Commission, April 25, 2023.
  38. O’Donnell, James. “Training AI Music Models Is about to Get Very Expensive.” MIT Technology Review, June 27, 2024. https://www.technologyreview.com/2024/06/27/1094379/ai-music-suno-udio-lawsuit-record-labels-youtube-licensing/.
  39. Joanna Maciejewska (AuthorJMac), “I Want AI to Do My Laundry and Dishes so That I Can Do Art and Writing…” X (formerly Twitter), March 29, 2024. https://x.com/AuthorJMac/status/1773679197631701238.
  40. Clark, Claudia. Radium Girls, Women and Industrial Health Reform: 1910-1935. Chapel Hill, NC: University of North Carolina Press, 1997.
  41. Nader, Ralph. Unsafe at Any Speed: The Designed-in Dangers of the American Automobile. Grossman, 1965.
  42. Chu, Amanda. “US Slows Plans to Retire Coal-Fired Plants as Power Demand from AI Surges.” Financial Times, May 30, 2024. https://web.archive.org/web/20240702094041/https://www.ft.com/content/ddaac44b-e245-4c8a-bf68-c773cc8f4e63.
  43. Smith, Brad. “Microsoft Will Be Carbon Negative by 2030.” The Official Microsoft Blog, January 16, 2020. https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/.
  44. Rathi, Akshat, Dina Bass, and Mythili Rao. “A Big Bet on AI Is Putting Microsoft’s Climate Targets at Risk.” Bloomberg, May 23, 2024. https://www.bloomberg.com/news/articles/2024-05-23/a-big-bet-on-ai-is-putting-microsoft-s-climate-targets-at-risk.
  45. Simonite, Tom. “What Really Happened When Google Ousted Timnit Gebru.” Wired, June 8, 2021. https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/.
  46. Bender, Emily M., and Alex Hanna. “Mystery AI Hype Theater 3000.” The Distributed AI Research Institute, 2024. https://www.dair-institute.org/maiht3k/.
  47. Rathi, Akshat. “Google Is No Longer Claiming to Be Carbon Neutral.” Bloomberg, July 8, 2024. https://www.bloomberg.com/news/articles/2024-07-08/google-is-no-longer-claiming-to-be-carbon-neutral.
  48. Kneese, Tamara, and Meg Young. “Carbon Emissions in the Tailpipe of Generative AI.” Harvard Data Science Review, Special Issue 5 (June 11, 2024). https://doi.org/10.1162/99608f92.fbdf6128.
  49. Crabapple, Molly, and Paris Marx. “Why AI Is a Threat to Artists, with Molly Crabapple.” Tech Won’t Save Us, June 29, 2023. https://techwontsave.us/episode/174_why_ai_is_a_threat_to_artists_w_molly_crabapple.html.
  50. Jiang, Harry H., Lauren Brown, Jessica Cheng, et al. “AI Art and Its Impact on Artists.” Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 363–74. AIES ’23. Association for Computing Machinery, 2023. https://doi.org/10.1145/3600211.3604681.
  51. Frawley, Chris. “Unpacking SAG-AFTRA’s New AI Regulations: What Actors Should Know.” Backstage, January 18, 2024. https://www.backstage.com/magazine/article/sag-aftra-ai-deal-explained-76821/.
  52. See Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018, for a fuller discussion of how the U.S. (and global) online ecosystem has been reconfigured to fall firmly under the control of for-profit companies making billions, mostly through advertising revenue.
  53. Kelly, Jack. “Google’s AI Recommends Glue on Pizza: What Caused These Viral Blunders?” Forbes, May 31, 2024. https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-pizza-viral-blunders/.
  54. Some financial self-regulatory authorities have even added warnings about AI pump-and-dump schemes. Financial Industry Regulatory Authority. “Avoid Fraud: Artificial Intelligence (AI) and Investment Fraud.” January 25, 2024. https://www.finra.org/investors/insights/artificial-intelligence-and-investment-fraud.
