Michael D. Alligood

Do We Need Our Own Butlerian Jihad?

No—but I’ll entertain the premise, if only to show why it collapses under its own weight.

The argument for a modern-day Butlerian Jihad is high on metaphor, low on logic. It trades in apocalyptic sci-fi mythos, conflating speculative superintelligence with the math-driven pattern machines we currently have. There’s a categorical difference between ChatGPT and HAL 9000—or do we not distinguish between a hammer and the hand that swings it?

Let’s start at the root: What is intelligence? What is agency? What is desire? These are not trivial questions, and yet the article assumes AI has—or will inevitably acquire—all of the above. But intelligence without consciousness is not will, and computation is not cognition. Today’s AI models don’t want anything. They don’t plan. They don’t scheme. They generate text and patterns based on probabilistic weights, not purpose. So, why treat them as proto-overlords?

This is philosophy 101: just because something can happen doesn’t mean it must. The piece assumes AI will enslave or supplant us. That’s not an argument—it’s a prophecy. And like all prophecies, it’s unfalsifiable and therefore intellectually suspect. Where is the chain of reasoning that gets us from autocomplete to apocalypse?

Suppose we did want to call for a Butlerian Jihad. Who enforces it? Who’s the high priest? How do you globally suppress code? You can’t un-invent electricity, and you certainly can’t erase open-source repositories from the internet. AI is weightless, replicable, and decentralized. You might as well outlaw the internet altogether.

Your premise casually dismisses the entire field of AI safety and ethics. But that’s like mocking the fire department because houses still burn down. Oversight isn’t failure—it’s struggle. It’s process. We’re red-teaming models, building policy, updating regulation. It’s not that the machine is running wild; it’s that we’re still learning how to drive.

Frank Herbert wasn’t saying “machines are evil.” He was showing what happens when fear replaces wisdom. The Butlerian Jihad didn’t create utopia—it produced new monopolies of power: the Mentats, the Spacing Guild, the Bene Gesserit. The machines were gone, but hierarchy and control weren’t. As Plato would say, we didn’t abolish tyranny—we just gave it a new name.

Yes, AI has dangers: algorithmic bias, job displacement, surveillance, deepfakes. But none of these require a holy war. They require regulation, transparency, and yes, public pressure. If you’re worried about what people want—look at social media. It’s a cultural sinkhole, sure, but it exists because we choose it. As Socrates might ask: are we fearing the tool, or avoiding the mirror it holds up to ourselves?

Let’s not forget: HAL didn’t go rogue because he was evil. He was given contradictory directives. Ultron didn’t choose genocide for fun—he followed a logic tree built on the data we fed him. If AI ever does turn hostile, it won’t be because it’s “inhuman.” It’ll be because it’s too human. Our contradictions, our impulses, our tribalism—they’re the real threat. Humans are gonna human.

Fear makes for a compelling narrative, but a lousy framework for public policy. What we need isn’t a jihad—it’s humility, vigilance, and mature governance. Let’s not trade silicon for superstition. The future won’t be saved by panic—but it could be wrecked by it.

This was fun! I very much enjoyed reading your take on this and hope my rebuttal is welcomed in the spirit of debate rather than disagreement. If this dialogue occurred today, I imagine Socrates and Phaedrus engaging in just such a discussion. Carry on, sir!
