
Additional Resource
As a part of the Allen Lab’s Political Economy of AI Essay Collection, David Gray Widder and Mar Hicks draw on the history of tech hype cycles to warn against the harmful effects of the current generative AI bubble.
An archival PDF of this essay can be found here.
Only a few short months ago, generative AI was sold to us as inevitable by AI company leaders, their partners, and venture capitalists. Certain media outlets promoted these claims, fueling online discourse about what each new beta release could accomplish with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some even added “AI” to their names to juice their stock prices,1 and companies that mentioned “AI” in their earnings calls saw similar increases.2
Investors and consultants urged businesses not to get left behind. Morgan Stanley positioned AI as key to a $6 trillion opportunity.3 McKinsey hailed generative AI as “the next productivity frontier” and estimated gains of $2.6 trillion to $4.4 trillion,4 comparable to the annual GDP of the United Kingdom or the value of all the world’s agricultural production.5 6 Conveniently, McKinsey also offers consulting services to help businesses “create unimagined opportunities in a constantly changing world.”7 Readers of this piece can likely recall being exhorted by news media or their own industry leaders to “learn AI” while encountering targeted ads hawking AI “boot camps.”
While some have long been wise to the hype,8 9 10 11 global financial institutions and venture capitalists are now beginning to ask if generative AI is overhyped.12 In this essay, we argue that even as the generative AI hype bubble slowly deflates, its harmful effects will last: carbon can’t be put back in the ground, workers continue to face AI’s disciplining pressures, and the poisonous effect on our information commons will be hard to undo.
Attempts to present AI as desirable and inevitable, and as a more stable concept than it actually is, follow well-worn historical patterns.13 A key strategy for a technology to gain market share and buy-in is to present it as an inevitable and necessary part of future infrastructure, encouraging the development of new, anticipatory infrastructures around it. From the early history of automobiles and railroads to the rise of electricity and computers, this dynamic has played a significant role. All these technologies required major infrastructure investments — roads, tracks, electrical grids, and workflow changes — to become functional and dominant. None were inevitable, though they may appear so in retrospect.14 15 16 17
The well-known phrase “nobody ever got fired for buying IBM” is a good, if partial, historical analogue to the current feeding frenzy around AI. IBM, while expensive, was a recognized leader in automating workplaces, ostensibly to the advantage of the corporations that adopted its systems. IBM famously re-engineered the environments where its systems were installed, ensuring that office infrastructures and workflows were optimally reconfigured to fit its computers, rather than the other way around. Similarly, AI corporations have repeatedly claimed that we are in a new age of not just adoption but of proactive adaptation to their technology. Ironically, in AI waves past, IBM itself over-promised and under-delivered; some described its “Watson AI” product as a “mismatch” for the health care context it was sold for, while others described it as “dangerous.”18 Time and again, AI has been crowned as an inevitable “advance” despite its many problems and shortcomings: built-in biases, inaccurate results, privacy and intellectual property violations, and voracious energy use.
Nevertheless, in the media and — early on at least — among investors and corporations seeking to profit, AI has been publicly presented as unstoppable.19 20 21 This rhetoric came from those eager to pave the way for a new set of heavily funded technologies; it was never a statement of fact about the technology’s robustness, utility, or even its likely future utility. Rather, it reflected a standard stage in the development of many technologies, in which a technology’s manufacturers, boosters, and investors attempt to make it indispensable by integrating it, often prematurely, into existing infrastructures and workflows, counting on this entanglement to “save a spot” for the technology to be more fully integrated in the future. The more far-reaching this early integration, the more difficult it becomes to disentangle or roll back the attendant changes, meaning that even broken or substandard technologies stand a better chance of becoming entrenched.22
In the case of AI, however, as with many other recent technology booms or boomlets (from blockchain to the metaverse to clunky VR goggles23 24), this stage was also accompanied by severe criticism of both the rhetorical positioning of the technology as indispensable and of the technology’s current and potential states. Historically, this form of critique is an important stage of technological development, offering consumers, users, and potential users a chance to alter or improve upon the technology by challenging designers’ assumptions before the “black box” of the technology is closed.25 It also offers a small and sometimes unlikely — but not impossible — window for partial or full rejection of the technology.
Talk of a bubble has simmered beneath the surface even as the money faucet has continued to flow,26 but we observe a recent inflection point. Interlocutors are beginning to sound the alarm that AI is overvalued. The perception that AI is a bubble, rather than a gold rush, is making its way into wider discourse with increasing frequency and strength. The more industry bosses protest that it’s not a bubble,27 the more people have begun to look twice.
For instance, users and artists slammed Adobe for ambiguous statements about using customers’ creative work to train generative AI, forcing the company to later clarify that it would do so only in specific circumstances. At the same time, an explicit promise not to use customer data for AI training has started to become a selling point for others, with one rival positioning its product as “not a trick to access your media for AI training.”28 Another company boasted a “100% LLM [large-language model]-Free” product, spotlighting that it “never present[s] chatbot[s] that act human or imitate human experts.”29 Even major players like Amazon and Google have attempted to lower business expectations for generative AI, recognizing its expense, its accuracy issues, and its as yet uncertain value proposition.30 Nonetheless, they have done so in ways that attempt to preserve the hype surrounding AI, which will likely remain profitable for their cloud businesses.
It’s not just technology companies questioning something they initially framed as inevitable. Recently, the venture capital firm Sequoia Capital said that “the AI bubble is reaching a tipping point”31 after failing to find a satisfactory answer to a question it posed last year: “Where is all the revenue?”32 Similarly, in Goldman Sachs’ recent report, “Gen AI: too much spend, too little benefit?”,33 the bank’s global head of equity research stated, “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.” Still, the report tellingly notes that even if AI doesn’t “deliver on its promise,” it may still generate investor returns, as “bubbles take a long time to burst.” In short, financial experts are pointing out that capital expenditures on things like graphics cards and cloud compute have not been met by commensurate revenue, nor does there seem to be a clear pathway to remedy this. This shift is a recognizable stage in which a product and its promoters do not suffer swift devaluation but begin to lose their top spots on the NASDAQ and other major exchanges.
Why is this happening? Technically, large-language models (LLMs) continue to produce erroneous but confident text (“hallucinations”) because they are inherently probabilistic machines, and no clear fixes exist because this is a fundamental feature of how the technology works.34 In many cases, LLMs fail to automate the labor that CEOs confidently claimed they could, and instead often decrease employee productivity.35 Economically, interest rates have risen, so “easy money” is no longer available to fund boosters’ loftiest and horrifically expensive generative AI dreams.36 Meanwhile, federal regulators have intensified their scrutiny, even as they struggle to rein in social media platforms. FTC chair Lina Khan has said, “There is no AI exemption to the laws on the books,” encouraging regulators to apply standard regulatory tools to AI.37 Legally, after misappropriating or allegedly stealing much of their training data during early generative AI development, companies now face lawsuits and must pay for their inputs.38 Public discourse is catching up too. We were promised that AI would automate tedious tasks, freeing people for more fulfilling work. Increasingly, users recognize that these technologies are built to “do my art and writing so that I can do my laundry and dishes,” in the words of one user, rather than the reverse.39
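To see why no simple patch exists, consider a deliberately toy sketch in Python; the prompt, vocabulary, and probabilities below are invented for illustration and stand in for what a real model learns at vastly larger scale. A language model produces each next token by sampling from a learned probability distribution over continuations, so a statistically common falsehood can come out with the same fluent confidence as a fact.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The capital of Australia is". All numbers are invented for illustration.
next_token_probs = {
    "Canberra": 0.40,   # factually correct
    "Sydney": 0.35,     # plausible but wrong: common in training text
    "Melbourne": 0.15,
    "a": 0.10,          # opens a fluent but vague continuation
}

def sample_next_token(probs):
    """Pick one token in proportion to its probability mass."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly one sample in three confidently asserts "Sydney": the sampler
# has no notion of truth, only of relative frequency in its training data.
print("The capital of Australia is", sample_next_token(next_token_probs))
```

Nothing in this sampling step consults the world; fluency and truth are simply decoupled, which is why hallucination is a property of the design rather than a bug awaiting a fix.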
While critics of any technology bubble may feel vindicated by seeing it pop — and by seeing stock markets and the broader world catch up with their gimlet-eyed early critiques — those who have been questioning the AI hype also know that the deflation, or even popping, of the bubble does not undo the harm already caused. Hype has material and often harmful effects in the real world. However ephemeral these technologies may seem, they are grounded in real-world resources, bodies, and lives, in ways reminiscent of the destructive industrial practices of past ages. Decades of regulation were required to roll back the environmental and public health harms of technologies we no longer use, from short-lived ones like radium to longer-lived ones like leaded gasoline.40 41 Even ephemeral phenomena can have long-lasting negative effects.
The hype around AI has already impacted climate goals. In the United States, plans to retire polluting coal power plants have slowed by 40%, with politicians and industry lobbyists citing the need to win the “AI war.”42 Microsoft, which had planned to be carbon negative by 2030,43 walked back that goal after its 2023 emissions came in 30% higher than in 2020.44 Brad Smith, its president, said that this “moonshot” goal was made before the “explosion in artificial intelligence,” and that now “the moon is five times as far away,” with AI as the driving factor. After firing employees for raising concerns about generative AI’s environmental costs,45 46 Google has also seen its emissions increase; it no longer claims to be carbon-neutral and has pushed its net-zero emissions goal date further into the future.47 This carbon can’t be unburned, and the breathless discourse surrounding AI has helped ratchet up the existing climate emergency, providing justification for companies to renege on their already-imperiled environmental promises.48
The discourse surrounding AI will also have lasting effects on labor. Some workers will see the scope of their work reduced, while others will face wage stagnation or cuts owing to the threat, however empty, that they might be replaced with poor facsimiles of themselves. Creative industries are especially at risk: as illustrator Molly Crabapple states, while demand for high-end human-created art may remain, generative AI will harm many working illustrators, as editors opt for its fast, low-cost illustrations over original creative output.49 Even as artists mobilize with technical and regulatory countermeasures,50 this burden distracts from their artistic pursuits. Unions such as SAG-AFTRA have won meager protections against AI,51 and while this hot-button issue perhaps raised the profile of their strike, it also distracted from other crucial contract negotiations. Even if generative AI doesn’t live up to the hype, its effect on how we value creative work may be hard to shake, leaving creative workers fighting to reclaim every inch lost during the AI boom.
Lastly, generative AI will have long-term effects on our information commons. The ingestion of massive amounts of user-generated data, text, and artwork — often in ways that appear to violate copyright and fair use — has pushed us closer to the enclosure of the information commons by corporations.52 Google’s AI search snippets tool, for example, authoritatively suggested adding glue to pizza and recommended eating at least one small rock per day.53 While these errors may seem obvious enough to be harmless, most AI-generated misinformation is not so easy to detect. The increasing prevalence of AI-generated nonsense on the internet will make it harder to find trusted information, allow misinformation to propagate, and erode trust in the sources we used to count on for reliable information.
A key question remains, and we may never have a satisfactory answer: what if the hype was always meant to fail? What if the point was to hype things up, get in, make a profit, and entrench infrastructure dependencies before critique, or reality, had a chance to catch up?54 Path dependency is well understood by historians of technology and by those seeking to profit from AI. Today’s hype will have lasting effects that constrain tomorrow’s possibilities. Using the AI hype to shift more of our infrastructure to the cloud creates dependencies on cloud companies that will be hard to undo, even as inflated promises for AI are dashed.
Inventors, technologists, corporations, boosters, and investors regularly seek to create inevitability, in part by encouraging a discourse of inexorable technological “progress” tied to their latest investment vehicle. By referencing past technologies, which now seem natural and necessary, they tautologically cast their current developments as inevitable. Yet the efforts to make AI indispensable on a large scale (culturally, technologically, and economically) have not lived up to their promises. In a sense, this is not surprising, as generative AI does not so much represent the wave of the future as it does the ebb and flow of waves past.
We are grateful to Ali Alkhatib, Sireesh Gururaja, and Alex Hanna for their insightful comments on earlier drafts.
Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions surrounding power, governance, and democracy as its authors consider how technology can better serve the public interest.