In 1956, MGM released a science fiction film that most audiences took as a clever updating of Shakespeare’s *The Tempest*—a story about a brilliant castaway, his innocent daughter, and the sailors who disturb their isolation. The film was *Forbidden Planet*, and it starred Walter Pidgeon as Dr. Morbius, a philologist who had unlocked the secrets of an extinct alien civilization. It featured Robby the Robot, who became an icon. It had a flying saucer and ray guns and a beautiful girl in short skirts.
What it also had—buried beneath the 1950s Technicolor spectacle—was the most accurate prediction of where we’d find ourselves in 2025.
The Krell were a species that had evolved far beyond us—a million years ahead of humanity, Morbius explains to the visiting Earth crew. They had conquered disease, eliminated want, and turned their entire planet into a single machine. This machine filled the hollowed interior of their world—9,200 thermonuclear reactors powering a system designed to do one thing: project thought into reality. Whatever a Krell imagined would materialize instantly, anywhere on the planet. They would become as gods, freed from all physical instrumentality.
The night the system came online, the Krell vanished. All of them. Overnight.
I saw this film as a teenager in the early 1960s, and what stayed with me wasn’t the monster or the romance or Robby’s comic relief. It was that image: a civilization so advanced it built a machine to give itself unlimited power, and that machine erased them from existence before they could understand what they’d done.
Sixty years later, I follow world events on X and see the same story unfolding in real time.
The race is on. The United States, China, and Russia are locked in a competition that makes the Cold War arms race look quaint. The prize isn’t territory or resources in any traditional sense. The prize is the machine—the system that will monitor, predict, and control human behavior at a scale never before possible.
They don’t call it that, of course. They call it artificial general intelligence. They call it innovation. They call it inevitable.
What they’re really building is a control grid.
Consider what’s required to run these systems. The data centers proliferating across the American landscape consume more electricity than many countries. A single AI query uses ten times the energy of a Google search. Microsoft, Google, and Amazon are racing to secure power sources—any power sources—to feed the machine’s appetite.
And here’s where the hypocrisy becomes impossible to ignore.
For thirty years, we’ve been lectured about carbon footprints. We’ve been told to drive smaller cars, eat less meat, turn down our thermostats. School children have been terrorized with visions of flooded cities and extinction. Entire industries have been strangled with regulations designed, we were told, to save the planet from the catastrophe of climate change.
Now watch what happens when the machine needs power.
Three Mile Island—the very name synonymous with nuclear disaster in the American imagination—is being restarted to feed a Microsoft data center. Decommissioned plants across the country are being evaluated for resurrection. New nuclear capacity is being fast-tracked with an urgency that would have been unthinkable five years ago. The same voices that shrieked about the existential threat of carbon emissions have gone suddenly, conspicuously silent.
Because the machine must be fed.
The climate emergency, it turns out, was never really about the climate. It was about control—about establishing the principle that human activity must be monitored, measured, and restricted by centralized authority. Now that a more effective control mechanism is within reach, the old narrative can be quietly retired. The carbon footprint of AI is irrelevant. The water consumed by cooling systems—millions of gallons daily—doesn’t matter. The environmental impact of mining rare earth minerals for chips and servers can be ignored.
The Krell didn’t worry about the environmental impact of hollowing out their planet either. They had bigger plans.
What are those plans, exactly?
Follow the infrastructure and you’ll see the shape of what’s coming. The data centers are one piece. The 5G networks blanketing urban areas are another. The push for Central Bank Digital Currencies—money that exists only as entries in a government database—is a third.
Put them together and you get a system of total surveillance and control that would have been unimaginable a generation ago.
Your money becomes a permission slip. Spend it on approved goods and services, and all is well. Deviate from acceptable behavior—buy too much gas, travel too far from home, express the wrong opinions—and your account can be frozen, limited, or deleted entirely. No judge, no jury, no appeal. Just an algorithm deciding you’ve become a risk.
The fifteen-minute city isn’t urban planning. It’s a cage dressed up in the language of convenience and sustainability. Everything you need within a short walk—and everything else increasingly inaccessible, requiring permissions and justifications and carbon budgets that will be tracked by the machine.
This is what the three great powers are racing to build. Not for their citizens. For their subjects.
The optimists will tell you that America is different. That our traditions of liberty and limited government will constrain how this technology is deployed. That we’re the good guys.
Watch what we do, not what we say.
The United States has spent the post-Cold War decades as the world’s bully, convinced of our own righteousness while we topple governments, impose sanctions that starve civilians, and wage endless wars against enemies that somehow multiply with every bomb we drop. We’ve constructed a global financial system designed to enforce our will on any nation that steps out of line. We’ve turned the dollar into a weapon.
Russia and China aren’t racing to build AI because they’re comic book villains. They’re racing because they’ve watched what America does to countries that can’t defend themselves. They saw what happened to Libya, to Iraq, to Syria. They see what we’re enabling in Gaza right now—the systematic destruction of a civilian population with American weapons and American diplomatic cover.
They’re not building the machine to oppress their people. They’re building it because we’ve made clear that any nation without comparable power will eventually find itself in our crosshairs.
The AI race is an arms race, and like all arms races, it accelerates because no one can afford to fall behind. The tragedy isn’t that one side is evil and the other good. The tragedy is that the logic of competition makes the outcome inevitable regardless of anyone’s intentions.
The Krell, Morbius explains in the film, had evolved beyond us in every way. They were peaceful. They had eliminated crime and war and want. They were, by every measure we’d recognize, good.
It didn’t save them.
Their great project—the machine that would free them from physical limitations—worked exactly as designed. The flaw wasn’t in the engineering. The flaw was in the assumption that consciousness could be trusted with unlimited power.
The Krell forgot about the subconscious. The buried drives, the repressed fears, the ancient hungers that evolution had wired into them long before they became civilized. When the machine came online, it didn’t just project their conscious intentions. It gave form to everything beneath—the monsters from the id, as the film famously calls them.
They were destroyed by their own shadows.
I’m not predicting that AI will become sentient and turn on us. That’s the Terminator narrative, and it misses the point entirely. The danger isn’t that the machine will develop its own will. The danger is that it will serve our will—all of it, including the parts we don’t acknowledge.
What happens when a system designed to predict and control human behavior is deployed by governments that view their own citizens as potential threats? What happens when the tools built to fight terrorism are turned on domestic dissent? What happens when the algorithm decides that your opinions constitute a risk factor?
We don’t have to imagine. We can watch it happening.
The censorship regimes that emerged during COVID were a beta test. The social credit systems being refined in China are a beta test. The financial deplatforming of individuals and organizations who hold unfashionable views is a beta test.
The full system isn’t online yet. But it’s coming. And the three great powers are racing to complete it because whoever controls the machine controls the future.
There’s a scene near the end of *Forbidden Planet* where Morbius finally confronts what he’s done. His unconscious mind—amplified by the Krell technology—has been murdering the people who threaten his isolation. He didn’t know. He didn’t intend it. But the machine gave his hidden fears the power to kill.
“My evil self is at that door,” he says, “and I have no power to stop it.”
That’s where we are. The machine is being built. The infrastructure is going in. The legal and financial frameworks are being established. And the people building it tell themselves they’re doing it for good reasons—for security, for efficiency, for progress.
They don’t see the monster they’re creating because it’s made of their own shadows.
The film ends with Morbius triggering the self-destruct sequence that will obliterate the Krell machinery and the entire planet. As the survivors flee toward Earth, Commander Adams reflects that perhaps in a million years, humanity will reach the heights the Krell attained—but hopefully with wiser understanding of their own natures.
That was the optimistic Hollywood ending, required by the studio system of 1956.
I don’t think we get a million years. I don’t think we get a century.
The Krell vanished overnight. Their technology was so powerful that once it turned against them, there was no time to understand what was happening, no opportunity to shut it down, no possibility of survival.
We’re building the same machine. We’re hollowing out our civilization to power it—not the physical interior of a planet, but something more fundamental. We’re hollowing out privacy, autonomy, the very possibility of a life outside the system’s view. We’re building a control grid that will monitor every transaction, every movement, every communication. And we’re handing it to governments that have demonstrated, repeatedly, that they cannot be trusted with such power.
The Krell were a million years more advanced than us. They were wiser, more peaceful, more evolved.
It didn’t matter.
The machine doesn’t care about your intentions. It only amplifies what you are. And if the last century has taught us anything about human nature—about what governments do when they have power and what people do when they’re afraid—then we already know how this ends.
We just don’t want to look at it.
The Krell couldn’t look at it either. They built their great project, they congratulated themselves on their achievement, and sometime in the night, while they slept, the thing they had created erased them from existence.
I saw that movie when I was a teenager. I’m seventy-three now, and I’ve watched the warning play out in slow motion across my entire adult life. The machine gets bigger. The controls get tighter. The justifications get more elaborate.
And somewhere, in a data center humming with the electricity of a resurrected nuclear plant, the great project inches closer to completion.
The Krell knew something we don’t.
Or maybe they didn’t. Maybe that’s the point. Maybe the lesson of *Forbidden Planet* isn’t that advanced civilizations destroy themselves through hubris. Maybe it’s simpler and darker than that.
Maybe some machines, once built, cannot be unbuilt.
Maybe some doors, once opened, cannot be closed.
Maybe we’re not racing toward a future we can control, but toward an ending that was written the moment we decided to build the machine in the first place.
The Krell found out overnight.
We’ll find out soon enough.