We've Seen This Movie Before: What History Tells Us About AI's ‘Inevitability’ Argument

Neal Mohan, CEO of YouTube, and Mark Zuckerberg, CEO of Meta, against a color gradient background

Last week, juries in two states held social media companies accountable for harm caused to children. That shift did not happen overnight. It followed a pattern that has repeated across industries for over a century, and it tells us what is coming for AI.

On March 24, a New Mexico jury ordered Meta to pay $375 million for knowingly harming children's mental health and concealing what it knew about child exploitation on its platforms. The next day, a Los Angeles jury found both Meta and YouTube negligent for designing platforms that addicted and harmed a young woman who began using them as a child. Experts immediately called it Big Tech's “tobacco moment.”

The damages were modest, but the significance was not. For years, the social media industry ran a familiar script: the science is not settled; parents should manage screen time; regulation would be impractical; we are already investing in safety tools; and you cannot regulate the internet state by state. Last week, two juries rejected that script entirely – not by weighing the industry's arguments point by point, but by responding to a story: a plaintiff with a name, and internal documents showing that the companies knew about the harm.

That is not the first time an industry's inevitability script has collapsed – it is not even the tenth. From tobacco to auto safety to child labor, the pattern is remarkably consistent. Powerful industries deploy the same set of narrative moves to delay accountability. The moves work, sometimes for decades. And then, every time, they lose. Not to better policy arguments alone, but to policy arguments built on better stories.

The pattern is now directly relevant to AI. On March 20, five days before the verdicts, the White House released its National Policy Framework for Artificial Intelligence, a four-page document deploying every move in the same script: AI is too complex to regulate, child safety is a parental responsibility, slowing down means losing to China, industry should set its own standards, and federal preemption should override stronger state protections. The framework presents AI as an inexorable force to be harnessed, not a set of products built by companies making choices. It never asks whether the system it proposes to build is fair.

That is not an argument against AI. The technology will continue to develop. The argument is about who gets to decide how it is built, and whether the public has a voice in that process. The social media verdicts show us what happens when the public finally gets that voice. History shows us that this moment always arrives. The only question is how long it takes, and how much damage accumulates before it does.

The Script: Five Narrative Moves That Repeat Across Industries

Across tobacco, auto safety, child labor, and now AI, the same five communication tactics surface whenever an industry attempts to outrun public accountability. They are not always deployed cynically; sometimes the people using them believe their own arguments. But the pattern is consistent enough to be worth naming.

“It's too complex for you to understand.” The first move manufactures technical complexity until the public feels unqualified to hold an opinion. The tobacco industry sustained this for decades with “the science isn't settled.” Social media companies argued that the relationship between their platforms and adolescent mental health was too complex for any single study to resolve. Meta's spokesperson responded to last week's verdict with the same move: “Teen mental health is profoundly complex and cannot be linked to a single app.” The AI framework deploys a structural variant: AI development is “inherently interstate” and too complex for fifty regulatory regimes. The effect is always identical: it communicates to the public that this is not their conversation to have.

“It's your responsibility, not ours.” The second move shifts the burden of safety from producer to consumer. For decades, the auto industry maintained that crashes resulted from driver error. As Ralph Nader documented in Unsafe at Any Speed, driver error had been the sole framework for investigating traffic accidents long before anyone examined whether the vehicles themselves were designed dangerously. Social media companies insisted that parents should monitor their children's screen time rather than asking whether the platforms were engineered to be addictive. The AI framework performs a similar inversion: it positions child safety as a parental responsibility (“empowering parents”) while explicitly shielding the developers from liability. Last week's verdict directly rejected this logic: both juries found that the platforms themselves, not the users, bore responsibility for the harm. This move operates differently from the others. The rest buy time, while this one reassigns blame.

“If we slow down, the enemy wins.” The third move manufactures geopolitical urgency, making regulation appear reckless. In the early twentieth century, industrialists argued that abolishing child labor would collapse the textile and mining industries. The framework's equivalent: if the United States regulates AI, China wins. The document frames its entire rationale through “ensuring American AI dominance” and warns that regulation would “undermine American innovation and our ability to lead in the global AI race.” The emotional structure is unchanged across a century: act now, or the enemy overtakes you. But as journalist Karen Hao has argued, the framing of a US-China “AI arms race” is itself misleading and politically driven, a narrative constructed to foreclose the possibility that AI can be developed both competitively and safely. 

“Let us handle it.” The fourth move advocates for self-regulation in place of external accountability. The tobacco industry funded its own research through the Tobacco Industry Research Committee, producing studies engineered to delay regulation and sustain doubt. A 1969 tobacco industry internal memorandum stated the objective plainly: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public.” Social media companies pointed to their own safety tools and content moderation teams as evidence that external regulation was unnecessary. The framework calls for “industry-led” standards and proposes “regulatory sandboxes” that would exempt companies from existing rules. As Scott Galloway has observed, tech “is the only industry in history of this size or importance that has almost zero regulation.” The industry's advocacy for self-governance is not altruism. As he wrote in 2023, “The techno-catastrophists want to create a narrative that the shit coming down the pike is not the result of their actions, but the inevitable cost of progress.”

“Don't let locals mess this up.” The fifth move preempts local regulation by arguing that only centralized authority can govern effectively. This is the framework's most consequential provision: it seeks to override the fifty states with a single federal standard that, by design, is lighter than what many states have already enacted. Robert Weissman, co-president of Public Citizen, has described it as “a hollow document with only one tough and meaningfully binding provision, delivering Big Tech’s top policy priority: It aims to preempt all state laws and rules dealing with AI.” But the social media verdicts illustrate the flaw in this strategy. It was state-level action that finally held the platforms accountable: the New Mexico attorney general's office and a Los Angeles County courtroom. The laboratories of democracy produced the accountability that federal inaction had not.

These five moves constitute a delay strategy. And delay strategies work, as long as the public is not paying attention. The industries deploying them have the budget and media access to sustain the script without building broad public support. They do not need cultural salience. They need its absence. The script holds as long as the silence does; it breaks when someone fills that silence with a story the public recognizes as its own.

Tobacco companies postponed meaningful regulation for over thirty years. Social media operated with virtually no accountability for fifteen years. Last week, two juries signaled that social media's silence is over. The question for AI is whether the same script will hold for years or whether the counter-narrative will arrive faster this time. The framework is an attempt to codify the rules before the narrative can form. The White House wants Congress to convert it into legislation this year, before the midterm elections. If it succeeds, the resulting law will be far harder to amend than it would have been to prevent.

Why the Script Always Fails

The script does not fail because its policy arguments are weak. It fails because its communication strategy contains a structural vulnerability: it works only as long as the public is not paying attention.

The inevitability script is never defeated by counter-arguments. It is defeated by counter-narratives, stories that achieve cultural salience by meeting three conditions simultaneously: prevalence, resonance, and emotional imprinting. When a narrative is everywhere, when it connects to something people already feel, and when it lodges emotionally in memory, the script's hold over the public conversation collapses.

The script operates on borrowed time. It holds for as long as the issue remains technical and impersonal. The moment someone provides the public with a narrative that makes the issue concrete, personal, and emotionally urgent, the conditions for the script's collapse begin to assemble.

But the collapse is neither automatic nor clean. History shows that the script breaks through a specific mechanism: a small, concrete, emotionally legible incident ignites public concern that already existed but lacked an outlet. The concern is the dry tinder; the incident is the spark.

In 1906, Upton Sinclair published The Jungle, a novel about labor exploitation in Chicago's meatpacking industry. The public already harbored diffuse distrust of the food supply. Sinclair's visceral, specific account of contaminated meat gave that distrust a focal point. Congress passed the Pure Food and Drug Act within months. 

In 1911, 146 workers, most of them poor immigrant women, died in the Triangle Shirtwaist Factory fire in New York. The initial demands were modest: fire exits that opened outward, unlocked doors during working hours. Those demands became the template for more than thirty workplace safety laws within three years, and went on to influence federal OSHA standards half a century later. 

In 1978, Lois Gibbs, a mother in Love Canal, New York, began organizing after her son developed serious health problems at a school built over chemical waste. Local officials dismissed her coalition. Two years later, the federal government relocated 833 families, and Congress passed the Superfund Act. 

The tobacco industry's script held for over three decades. It broke not because the science improved, but because of perceived betrayal. When Jeffrey Wigand disclosed internal industry practices and documents revealed that the companies had known about the harms for decades, the public did not respond to the data. They responded to the deception. The script “failed” in the sense that regulation eventually arrived. But the delay was not a side effect; it was the objective. The industry profited from every year the narrative held.

The auto industry's script broke when General Motors hired private investigators to surveil Ralph Nader after the publication of Unsafe at Any Speed. The public had largely ignored the book. GM's overreach transformed the story from a technical argument into a personal scandal, generating the kind of emotional engagement that a policy report never could. The vehicles did not change, but the narrative did. These moments served as public-relations disasters for the industries involved and as catalysts for advocates positioned to capitalize on them.

In every case, cultural salience created the opening for political action. It did not replace the organizing, coalition-building, and legislative work that followed. But without the spark, the reform did not begin.

The Proof-Case: From Narrative Shift to Courtroom

If the script collapses when cultural salience arrives, the question becomes practical: what does it look like when cultural salience actually builds against an entrenched industry narrative?

For years, the tech industry deployed all five script moves on smartphones and adolescent mental health. The evidence was characterized as mixed (complexity). Parents were told to manage their children's screen time (responsibility transfer). Phone bans were dismissed as impractical (urgency framing). The industry pointed to its own digital wellbeing tools (self-regulation). And state-level regulation was resisted as fragmented (preemption of local action). 

The shift did not begin with a single book. The ground had been prepared for years. The Social Dilemma documentary brought the mechanisms of addictive design to a mass audience. The Cambridge Analytica scandal exposed how platform data could be weaponized. Parents and teachers could see the effects of social media on children with their own eyes. The concern was already widespread. What Jonathan Haidt's The Anxious Generation provided was the relentless, compelling narrative that organized all of it into a single, culturally salient argument.

It achieved prevalence: bestseller, front-page coverage, Congressional testimony, Davos, NPR, every parenting group chat in the country. It achieved resonance: every parent already sensed the problem, and the book supplied language for an experience they recognized from their own households. And it achieved emotional imprinting: Haidt framed the issue around puberty, making the harm feel physical, developmental, and irreversible.

The legislative response was rapid. Within eighteen months of the publication of Haidt's book, more than thirty states and Washington, D.C., had enacted laws restricting phone use in schools. And last week, the narrative reached the courtroom. Two juries held Meta and YouTube accountable for the harms their platforms caused. The script that had held for fifteen years broke within two days.

The path from bestseller to legislation in more than thirty states took roughly eighteen months. Tobacco's script held for over thirty years. Social media's held for fifteen. The interval is compressing. But the compression is not automatic. It required years of accumulating evidence, multiple contributing moments, and finally a narrative that made it all cohere. The tech industry deployed all five script moves. Each one was neutralized not by a policy counter-argument, but by a story the public recognized as its own.

What This Means for Responsible AI

The social media verdicts demonstrate the pattern. The question is whether it will take another fifteen years to arrive at the same place.

Seismic's own research, On The Razor's Edge, and new data from Blue Rose Research suggest the ground is already shifting. AI is now the fastest-rising issue in political salience in the United States. Public concern has moved from the existential to the proximate: jobs, children, and mental health. And in our focus groups, people have already merged their anxiety about AI with their anxiety about the economy into a single grievance. They do not experience AI as a technology problem. They experience it as the latest expression of a system that does not work for them. Blue Rose Research confirms that 64% of Americans believe the system is rigged in favor of elites, a view that crosses partisan lines. The unifying emotion is not complicated, and it runs deep. It is not fair.

When leaders assert that AI will generate productivity gains for everyone, net public trust registers at -20. When they assert it will not cause widespread job displacement, trust drops to -41. The more emphatically elites offer reassurance, the less they are believed. The AI framework's language is not merely failing to reach the public; it is generating active resistance.

The Responsible AI community cannot afford to organize around a single issue. Child safety is the most emotionally accessible entry point, but public concern extends to jobs, economic security, and the fundamental question of who the system is being designed to serve. If the counter-narrative wins on children but concedes on developer liability, state preemption, and the distribution of AI's benefits, the framework achieves its objective. And the counter-narrative does not yet exist. Responsible AI has no equivalent of Unsafe at Any Speed, The Social Dilemma, or The Anxious Generation. It has no phrase that captures the problem in terms that anyone can repeat. That absence is the most urgent gap in the field.

Inevitability Is a Communication Strategy, Not a Fact

The language of inevitability does not describe a reality. Instead, it constructs one. Every industry that has faced public pressure to accept accountability has reached for this language. It is designed to make the absence of regulation appear natural and necessary. It has never held permanently.

Every “inevitable” industry practice, from child labor to leaded gasoline to unregulated tobacco, was once treated as a permanent feature of economic life. Each was eventually displaced by communication that made the invisible visible, the abstract personal, and the inevitable unacceptable. And in every instance, the emotion that ultimately broke the script was the same one now accumulating around AI: the conviction that the arrangement is not fair.

Tobacco's eventual “failure” consumed thirty years and millions of lives. The relevant question is never whether the script will lose. It is how quickly the counter-narrative arrives. The social media verdicts last week came fifteen years after Facebook introduced the algorithmic feed; the phone-free school movement went from bestseller to more than thirty state laws in under two years. The interval compresses when the communication strategy is deliberate, emotionally grounded, and directed at the public rather than at the policy community.

Last week, a plaintiff named Kaley sat in a Los Angeles courtroom and told a jury what social media had done to her. Internal documents proved the companies knew. That was social media's Triangle Shirtwaist moment. It took fifteen years to arrive.

AI is running the same script. The question is what AI's instigating moment will be. It may be the case of a teenager harmed by a chatbot. It may be the wave of layoffs that lands in a single community. It may be a document leak that reveals what the companies already know. We do not know what form it will take, but if the pattern holds (and it has held for over a century), it will come.

How AI develops, under what constraints, for whose benefit, and at whose expense: these remain open questions. The technology is inevitable. The governance is not. The emotion is already here, and it is among the most powerful forces in democratic politics. What remains absent is the narrative that gives it direction. And the legislative window in which the narrative can alter the outcome is open. For now. 


Seismic Foundation is a 501(c)(3) nonprofit
EIN 33-3325360