Terminator 2 - Judgment Day: Echoes
"It’s in your nature to destroy yourselves." – The Terminator, in Terminator 2: Judgment Day (1991)
"If we make bombs, they will make bombs." – Leo Szilard, Hungarian-American physicist, in 1945
The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. - Stuart Russell, British computer scientist, in 2015.
I’ve never been a crank or a paranoiac. I don’t readily believe in conspiracies, and I’m not a fatalist. But I marvel at how the human race can’t seem to resist skating along the edge of destruction as we again tamper with forces that we may not be able to control.
As I have written before on this site, I grew up a teen fully accepting the likelihood that the U.S. and U.S.S.R. would ultimately go to war and blow up half the world – if not all of it. I didn’t have much faith in deterrence or Mutual Assured Destruction or any of the doctrines we were taught would protect us. My reasoning was simple: If you build a weapon, you’ll probably end up using it. (Szilard, quoted above, agreed.)
Thankfully, my conclusion was not only simple but simplistic. Nations with nuclear weapons have, with one very large exception, seemed to understand that there is no such thing as a tactical or limited application. It turns out they value self-preservation. But obviously, the United States wouldn’t currently be at war with Iran if there were a belief that all developers of nukes are rational actors. And the war demonstrates something else: We are as good, as a species, at killing each other as we have ever been, nuclear weapons aside. Much of that efficiency of late has come through the use of lethal drones – unmanned craft with payloads that are growing increasingly autonomous.
The dustup between the Pentagon and AI company Anthropic has much to do with exactly how autonomous, how “free thinking,” we want our killing machines to be. Anthropic made the not unreasonable request that its tech not be used to build autonomous weapons or spy on Americans. The Defense (sorry, “War”) Department accused the company of tying its hands to “warfight” and labeled it a national-security risk.1
The controversy served the useful purpose of illustrating for the public just how embedded AI concepts are in modern weaponry. The U.S. launched “Project Maven,” which uses machine learning to scan drone video for potential targets, almost a decade ago. Since then, the Pentagon has been moving toward weapons that can make target assessments more quickly than humans, while developing small, cheap drones that work collectively to share information and eliminate threats. Humans, the government argues, are still in the loop, still the ultimate decision-makers, but the day is clearly coming when AI-driven drones are empowered to select and then destroy. The question is when we allow the machine to push its own button.
******
The Terminator franchise is part of our pop culture lore, so much so that “Skynet” has reentered the lexicon as an all-purpose nickname for AI systems that could potentially turn lethal toward humanity.
James Cameron’s 1991 sequel to his low-budget 1984 hit The Terminator was prescient in how explicitly it ties sentient AI to the development of the nuclear bomb. Oddly, the movie was made and set in the early ’90s – after the Wall fell and as the U.S.S.R. was collapsing – but Cameron was, at heart, a Cold War kid, and that provides the story’s frame of reference. He told Christiane Amanpour in an interview last summer that he was eight during the Cuban Missile Crisis. “I don’t think it ever left me,” he said.
“Notice was served to us as an intelligent species in 1945 that it was possible for us to destroy ourselves,” Cameron said. “I think we’ve lost touch with how powerful these weapons really are, what a threat they pose to us.”
Cameron didn’t seem much reassured that the Wall falling would mean the end of the nuclear threat. There’s a line in the T2 script in which young John Connor (Edward Furlong) asks whether the Russians would attack the U.S. since “they’re our friends now.” Be that as it may, the story asserts, if the U.S. were to launch a first strike (via Skynet), the Russians would have no choice but to respond in kind. That’s just as true, or more so, today.
To up the ante, Cameron stages a horrific scene in which Sarah Connor (Linda Hamilton, back and buff) dreams about kids on a playground dying in the fire of a nuclear blast, the burning skin flaying from their bodies. An original inspiration for the sequel, the director said, was Sting’s 1985 song “Russians” in which the artist sings, “I hope that Russians love their children too.” It’s also a fairly strong callback to the 1983 TV film The Day After, which depicted a strike on America.
But let’s not pretend this was some preachy antiwar film. Cameron layered his message within a propulsive thriller that features some of the most heart-stopping and spectacular chase scenes of all time, particularly an early sequence set along the L.A. River that involves Furlong on a motorbike and the T-1000 Terminator (Robert Patrick) at the wheel of a semi-truck. It’s a popcorn movie in the best way.
Cameron, who developed the screenplay with William Wisher, hit upon the dynamic that made the sequel click — take the Terminator from the first film and make him Connor’s protector. Then create a sleek, totally different kind of cyborg assassin as the new antagonist, one that can mimic humans.2
Those who had managed to avoid spoilers perhaps walked into the theater expecting Schwarzenegger to terrorize poor Sarah all over again. But in the seven years since the first film, Arnold had ascended to full international stardom. He wasn’t playing villains any longer. He was the good guy.
Even so, Cameron plays with the audience’s expectations, setting up the conflict (almost wordlessly) through the first act so that you’re never quite sure whether the original recipe Terminator is after the young Connor. But once that has been made clear, the T-800 is given his own emotional arc, slowly becoming more human as the story progresses and bonding with John.
That gives the film some emotional heft. The ending is reminiscent of 1953’s Shane, directed by George Stevens, a Western that featured a gunslinger who protects a mother and child from hired thugs. (“Shaaaaaaane! Come back!”)
Indeed, it’s easy to forget now, 35 years later (f—k me!), what a massive hit T2 was. It smashed the box-office record for an R-rated opening and ended up making more than $500 million worldwide on a $100 million budget. (It was, at the time, the most expensive film ever made, and it remains Arnie’s most successful movie.) It was the top-grossing film of 1991, outpacing Robin Hood: Prince of Thieves, The Silence of the Lambs, and Beauty and the Beast, to name three other hits in what was a damn good year for movies. T2 was a phenomenon, one that had people all over the world saying “Hasta la vista, baby!” to each other in a phony Austrian accent.
But back to Skynet.
As the T-800 tells the story, the end of the world began in 1997, when AI embedded in Pentagon programs became self-aware and interpreted an effort to shut it down as an attack:
In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. . .
The idea, as related by Miles Dyson (Joe Morton), the creator of the system, had been to empower machine decision-making because AI’s processing speed outpaces that of humans. That can be critical in warfare. That was the argument in Cameron’s 1991 world, and it is the Defense Department’s argument now. There is an abiding belief that AI will provide the winning edge, not only in the conflict with Iran but in some future war that exists right now only on Pentagon dream boards.
Perhaps mindful of that, the White House’s “AI Action Plan” from last year makes explicit that AI superiority over China is a primary goal. The document’s title: “Winning the Race.”
Plainly, we’re in a new arms race – which now coexists with the old arms race, one that shows signs of revival. President Trump’s vision of foreign policy appears to lean toward powerful state actors divvying up spheres of influence. Those spheres will inevitably brush up against each other. At that moment, we may be in a position where AI fires the first shot, either on their side or ours. If that happens, which side will be able to assert control first and defuse the situation? The concern is that neither side will be able to, even if it wants to.
Several movies examined on Nuclear Theater concern not just weaponry but systems – systems that can slip from human oversight – and not just in ’80s larks like WarGames. That dynamic is expressed in films dating back to Fail-Safe and Dr. Strangelove and as recently as A House of Dynamite. While there has been a longstanding fear of an unprincipled megalomaniac triggering a nuclear conflict, the truth is that the human element has kept us alive.
I have recounted before the actions of Soviet air defense officer Stanislav Petrov, who while on duty in September 1983 was informed by the Soviet early-warning system that the U.S. had launched a surprise attack with multiple ICBMs. Petrov didn’t think the warning felt right – and rather than signal for retaliation, he reported it as a false alarm. He has been credited with saving the planet, although naturally the story is more complicated.
Even as they assembled their nuclear stockpiles, both the U.S. and U.S.S.R. established systems embedded with off-ramps, giving the principals breathing room to reach the correct decisions. Autonomous weapons greatly threaten that model: they process information and act so quickly that their human minders will have no chance to intercede.
Already, the pace of the Iran war is like nothing that has come before it. The Eurasia Review called it the “first AI war”:
There has been unprecedented use of AI-driven assets as Decision Support Systems (DSS), not merely as secondary analytical tools but as active enablers of kill chains. Typically, the process of gathering intelligence, identifying targets, conducting simulations and damage assessments, performing predictive analyses, assigning weapons, and executing missions took weeks, if not months, of human deliberation. However, the current war has seen attacks executed faster than ‘the speed of thought’, exemplified by the US conducting almost 900 strikes on Iranian targets in the first 12 hours alone and over 5,500 strikes in the first 10 days. The advantages of AI and drones, such as decision compression and low-cost saturation, have also proven to come with high human costs. The hyper-condensed decision-making cycle leaves little room for the human operator to cross-verify. Reliance on AI-accelerated decision-making, which is often plagued by outdated data and a lack of rigorous human verification, has direct implications for human casualties.
Szilard, the physicist quoted at the top of the piece, worked on the Manhattan Project. He doesn’t get the same treatment from history as does his contemporary Robert Oppenheimer, but he could see just as clearly the dangers of what technology had wrought. His greatest fear, however, was of the scientists, military leaders, and politicians who, in their eagerness, push forward past a point of no return. “The problem of our age is not the atomic bomb, but the heart of man,” Szilard said in 1947.
There will always be someone willing to build a better bomb. Or a faster processor.
If you’re looking for optimism, look to the final words of T2: The future remains unwritten. We have time to find our own off-ramps. Don’t take it from me. Listen to the Terminator himself, Schwarzenegger, in a 2021 interview:
We are in charge. We don’t have to take this shit that’s coming our way. We can go and create a future, the one that we want, which is a good future without those machines. We just have to fight for it.
WHERE CAN I WATCH IT: T2 is available for rent on all major platforms. Just ask Claude. (Some weak AI humor for you.)
HEY ISN’T THAT: Longtime character actor Joe Morton, who plays Skynet creator Miles Dyson, has joked that T2 was the only movie people associate with him. He’s also great in John Sayles’ underappreciated Lone Star (1996), a film that features no robots. Also look for S. Epatha Merkerson, who played a police captain on “Law & Order” for about 43 years.
ARMAGEDDON INDEX (9/10): The apocalyptic future foretold in the movie is averted in the final act. But the sequels (none of which I have seen, and none of which were helmed by Cameron) apparently undo much of that, sadly.
DUST CLOUDS: Mary L. Cummings, a professor at George Mason University, looked at self-driving cars in San Francisco as potential indicators of how reliable autonomous weapons systems might be. Her study found that half of autonomous-vehicle accidents were caused by “phantom braking”: the vehicles often mistook non-existent obstacles for real ones, leading to abrupt braking and rear-end collisions by human-driven vehicles. Her conclusion was that autonomous weapons may suffer similar hallucinations, with deadly consequences.
TOP OF THE POPS: The No. 1 song on the Billboard chart in July 1991 when the film was released was “Rush Rush” by Paula Abdul. Unclear if any animated cats were involved in that one.
WHAT ELSE I’M WATCHING: TV: Rooster (S1, HBO Max), The Pitt (S2, HBO Max), The Fall and Rise of Reggie Dinkins (S1, Peacock); Movies: Margin Call (Chandor, 2011), Sinners (Coogler, 2025), Vera Cruz (Aldrich, 1954).
LAST ENTRY: The Dead Zone (1983)
NEXT ENTRY: Moscow on the Hudson (1984)
1 A California federal judge last week issued a temporary injunction preventing the Pentagon from labeling Anthropic a supply-chain risk. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” District Court Judge Rita F. Lin wrote.
2 Cameron reportedly considered casting Billy Idol as the upgraded Terminator. That would have been. . . a choice.