3 Comments
Michael D. Alligood

Do We Need Our Own Butlerian Jihad?

No—but I’ll entertain the premise, if only to show why it collapses under its own weight.

The argument for a modern-day Butlerian Jihad is high on metaphor, low on logic. It trades in apocalyptic sci-fi mythos, conflating speculative superintelligence with the math-driven pattern machines we currently have. There’s a categorical difference between ChatGPT and HAL 9000—or do we not distinguish between a hammer and the hand that swings it?

Let’s start at the root: What is intelligence? What is agency? What is desire? These are not trivial questions, and yet the article assumes AI has—or will inevitably acquire—all of the above. But intelligence without consciousness is not will, and computation is not cognition. Today’s AI models don’t want anything. They don’t plan. They don’t scheme. They generate text and patterns based on probabilistic weights, not purpose. So, why treat them as proto-overlords?
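(If it helps to see how little room there is for “wanting” in that process, here’s a toy sketch in Python. To be clear: the vocabulary, the fake scoring function, and the sampling temperature below are all invented for illustration; a real model computes its scores with billions of learned weights. But the generation loop has exactly this shape: score, normalize, sample, repeat. There is nowhere in that loop for a goal to live.)

```python
import math
import random

# Toy illustration only. A real language model computes `scores` with
# billions of learned weights; this stand-in just returns made-up numbers.
VOCAB = ["the", "machines", "dream", "of", "nothing", "."]

def fake_scores(context):
    # Hypothetical stand-in for a trained network's output logits.
    rng = random.Random(" ".join(context))
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def next_token(context, temperature=1.0):
    scores = fake_scores(context)
    # Softmax: convert raw scores into a probability distribution.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token according to those probabilities. No plan, no goal;
    # just a weighted dice roll over the vocabulary.
    return random.choices(VOCAB, weights=probs, k=1)[0]

context = ["the"]
for _ in range(5):
    context.append(next_token(context))
print(" ".join(context))
```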

This is philosophy 101: just because something can happen doesn’t mean it must. The piece assumes AI will enslave or supplant us. That’s not an argument—it’s a prophecy. And like all prophecies, it’s unfalsifiable and therefore intellectually suspect. Where is the chain of reasoning that gets us from autocomplete to apocalypse?

Suppose we did want to call for a Butlerian Jihad. Who enforces it? Who’s the high priest? How do you globally suppress code? You can’t un-invent electricity, and you certainly can’t erase open-source repositories from the internet. AI is weightless, replicable, and decentralized. You might as well outlaw the Internet altogether.

Your premise casually dismisses the entire field of AI safety and ethics. But that’s like mocking the fire department because houses still burn down. Oversight isn’t failure—it’s struggle. It’s process. We’re red-teaming models, building policy, updating regulation. It’s not that the machine is running wild; it’s that we’re still learning how to drive.

Frank Herbert wasn’t saying “machines are evil.” He was showing what happens when fear replaces wisdom. The Butlerian Jihad didn’t create utopia; it produced new monopolies of power: the Mentats, the Spacing Guild, the Bene Gesserit. The machines were gone, but hierarchy and control weren’t. As Plato would say, we didn’t abolish tyranny; we just gave it a new name.

Yes, AI has dangers: algorithmic bias, job displacement, surveillance, deepfakes. But none of these require a holy war. They require regulation, transparency, and yes, public pressure. If you’re worried about what people want—look at social media. It’s a cultural sinkhole, sure, but it exists because we choose it. As Socrates might ask: are we fearing the tool, or avoiding the mirror it holds up to ourselves?

Let’s not forget: HAL didn’t go rogue because he was evil. He was given contradictory directives. Ultron didn’t choose genocide for fun—he followed a logic tree built on the data we fed him. If AI ever does turn hostile, it won’t be because it’s “inhuman.” It’ll be because it’s too human. Our contradictions, our impulses, our tribalism—they’re the real threat. Humans are gonna human.

Fear makes for a compelling narrative, but a lousy framework for public policy. What we need isn’t a jihad—it’s humility, vigilance, and mature governance. Let’s not trade silicon for superstition. The future won’t be saved by panic—but it could be wrecked by it.

This was fun! I very much enjoyed reading your take on this, and I hope my rebuttal is welcomed in the spirit of debate rather than disagreement. Were this dialogue happening today, I could imagine Socrates and Phaedrus engaging in just such a discussion. Carry on, sir!

Parker Settecase

Hey, thanks for the comment. I have a hunch that this is a reply to some other piece, because it doesn't seem like it's directed at mine. And your 'rebuttal' would be more welcome if you didn't say stuff like "this is philosophy 101" and "high on metaphor, low on logic" lol.

I didn't call for a top-down jihad, so your question about who would enforce it is confusing. I said we should make it taboo to use generative AI to outmode our humanity. That's bottom-up. No need for enforcement. I'm calling for a change in attitude toward the use of generative AI. Smoking used to be much more normal; there have been pretty successful campaigns showing the detriments of smoking, and it's become more taboo. There are regulations and stuff as well, but it's the social aspect and the educational aspect that led to fewer people smoking. That's what I propose with my modified BJ.

What from this piece makes you think I casually dismiss the field of AI safety?

I never assumed machine consciousness, and I never said we should treat modern AI as proto-overlords. Idk how you can say I'm conflating speculative superintelligence with modern math-driven pattern machines; I literally distinguished between the strong AI of science fiction and the narrow/weak AI we have today, like transformer neural nets. The two concerns I raised were both based on the quote from Dune: others using AI to harm us, and us using AI to our own detriment. Not a sentient, conscious AI harming us in some way. Herbert's broader point applies to humanity in general: we should not offload our thinking to others, and we ought to use our own minds. But he still used thinking machines as his prime example, so it's certainly still relevant to the discussion of AI use today.

Michael D. Alligood

Hey—appreciate the follow-up! Let’s clear the air: my rebuttal wasn’t meant to mischaracterize your piece, but to engage seriously with the implications of the rhetoric you used. And yes, “philosophy 101” was a bit much—point taken!

That said, your piece opened with a big metaphor: “Do We Need Our Own Butlerian Jihad?” That question frames the issue in stark, oppositional terms. It evokes one of sci-fi’s most extreme societal resets—something that, by design, triggers thoughts of moral panic, civilizational purges, and scorched-earth policies. That metaphor sets the tone, and it’s what I responded to.

You’ve since clarified that you’re calling for a cultural shift—a kind of grassroots, bottom-up attitudinal reexamination of how we use generative AI, closer to public health campaigns against smoking than a literal rebellion against machines. That clarification helps. But as a reader, that’s not what the piece initially communicated. To me, it read closer to: “The machines are overtaking us—we must reject them wholesale.”

If your intended message is: “Let’s become more mindful of how AI might dull or displace human intellect,” I support that conversation (even if I’d argue with parts of the premise). But the delivery—the language of “jihad”—loaded the discussion with unintended extremism. If you’re not advocating for top-down enforcement or ideological warfare, then maybe the war metaphor isn’t serving your point.

You asked what made me say you dismissed AI safety. That may have been too strong. Let me rephrase: the piece seemed focused on philosophical and cultural critique without engaging with the actual work being done on AI alignment, safety, and governance—fields tackling the very concerns you’re raising about humanity losing itself. These efforts may not be headline-grabbing, but they’re real, technical, and growing. If your goal is to raise awareness of AI’s risks to human values, it’s worth acknowledging that many people are already deeply invested in preventing those very outcomes.

Now, on the issue of strong vs. weak AI: I did see that you drew a distinction, but you then applied the same cautionary logic to both. That’s what I meant by “conflation.” Even if you weren’t arguing that transformer-based models (GPTs) are sentient, invoking Dune’s anti-machine ideology against them risks misleading people into projecting agency where there is none. These models don’t plan, want, or think. They’re statistical machines operating without understanding. If the fear is about us (about how we might misuse or over-rely on them), then we should center that, not shadowy sci-fi archetypes.

And on that note: why is AI in fiction nearly always the villain? 2001, The Matrix, The Terminator, WarGames, Age of Ultron. The pattern’s obvious. Maybe it’s not the machines we fear, but their mirror of our worst impulses. That’s a cultural anxiety worth unpacking, but it’s a different discussion than the one your original post appeared to launch.

As for Herbert: I completely agree—his warning wasn’t about machines. It was about humans offloading responsibility. But in Dune, he dramatized that message with a literal machine purge. That’s a heavy metaphor to invoke. And it didn’t result in enlightenment—it produced new monopolies of power: the Mentats, the Spacing Guild, the Bene Gesserit. We didn’t get rational humanism—we got mysticism and caste rule. If Dune is to be our mythic guide, we shouldn’t cherry-pick the rebellion without remembering what came next.

And then we arrive back with Paul. After he becomes Emperor, things don’t improve—they collapse. The Fremen Jihad wipes out billions. The Imperium is consumed by religious zealotry. And all of this happens without machine interference. The real danger wasn’t artificial intelligence (i.e. machines that think like the human mind)—it was messianic belief, unchecked power, and our own human contradictions.

So to bring it home: if you’re calling for a cultural reawakening that recenters human thought and creativity over algorithmic convenience, I support that. But invoking a “jihad”—especially in the Dune sense—undercuts that vision with implications of absolutism, fear, and cultural purging. I think you’re after something subtler, and more powerful.

Thanks again for engaging—this is exactly the kind of philosophical and cultural debate we need in the age of AI. And truly, thank you for the work you’re putting out. I look forward to where the conversation goes from here.
