Some of the best philosophers of the 20th century were actually science fiction authors. I said it, and I’m not going to formalize it into a syllogism and defend it. Why not? Because “truth suffers from too much analysis” – Ancient Fremen saying (Frank Herbert, Dune Messiah, 100).
Some of my favorite philosophical sci-fi authors include Philip K. Dick, Isaac Asimov, and of course, Frank Herbert. There’s no surprise here: these are very popular sci-fi authors, and the hipster in me is tempted to make a list of lesser-known authors to show you how in-the-know I am, but I’ll save that for another time when I’m feeling more pretentious.
For now, I want to consider just a few sentences from Frank Herbert’s magnum opus, Dune, because in them Herbert gives us a profound and timely warning about the dangers of artificial intelligence, one that breaks from the usual “AI will wake up and kill us all” motif. If you haven’t seen my latest ParkNotes video on Herbert’s warning about AI, please do watch it; I have to let the YouTube AI know that I have more ideas in my notebooks than just ideas about notebooks…
The passage I want to consider comes early on in Dune, right after the Reverend Mother, Gaius Helen Mohiam, is done testing Paul Atreides with the box of pain and the gom jabbar. Their conversation turns to the topic of freedom and the Reverend Mother says “Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” To which Paul replies with a quote from the Orange Catholic Bible (a fictional amalgamation of several holy books in the Dune universe): “Thou shalt not make a machine in the likeness of a man’s mind.” The Reverend Mother then replies, “Right out of the Butlerian Jihad and the Orange Catholic Bible…But what the O.C. Bible should’ve said is: ‘Thou shalt not make a machine to counterfeit a human mind.’”
Now I love this section so much because there’s an implicit warning for us today as we continue to cede and offload more and more of our thinking to machines. Herbert says: don’t do this! Why not? Because the machines will become sentient and enslave or destroy all humans? No. Rather, because other humans will use the AI systems that we increasingly rely on to do our thinking to enslave us. This AI-powered enslavement happened in the Dune universe, and it took a full-scale holy war, the Butlerian Jihad (an homage to Samuel Butler’s 1872 book Erewhon), to end it.
While most popular robot sci-fi stories deal with the AI control problem, Herbert’s warning (assuming that it is a warning and not just a background theme in Dune) is unique in that it focuses on the Means of Control Problem.
So what is this AI control problem? According to Sven Nyholm, a professor of the Ethics of AI, the control problem is as follows:
“Simply put, the control problem is that as AI gets more and more autonomous and more and more powerful in its capabilities, the harder it will be to retain control over it.” (This is Technology Ethics: An Introduction, 96.)
(You can watch my full podcast episode with Dr. Nyholm on the ethics of AI below)
It’s this control problem which makes for lots and lots (and lots!) of interesting sci-fi and superhero stories. The machines become too powerful, and usually self-aware, and begin to control us!
Nyholm enumerates two potential solutions in the literature to the AI control problem:
(1) Motivational Control: wherein the AI architects bake human values into the system in order to serve as an indirect control over the AI even if it becomes too powerful for us to control in other ways. (This is known as value alignment, and it raises “the alignment problem”: how do we make sure the AI is aligned with our values? What are “our” values? How many values ought we to bake into the AI system?)
(2) Capability Control: put various limits on the AI system to keep it under control, which could include one or more of the following ideas:
i. Off-switch idea: build in an off switch as a safety measure so no matter how powerful the AI gets, you can turn it off at will.
ii. Boxing idea: “box in” the AI, cutting it off from the internet and other information sources and keeping it from transmitting itself anywhere outside the AI lab, where it is nice and controlled.
iii. Oracle idea: use another system, maybe another AI system, as a warning bell to let you know before the AI becomes too powerful for you to control any longer.
As it turns out, however, the sci-fi and superhero authors have already beaten the AI philosophers to the punch on all of these proposed solutions to the control problem, creating wonderful stories wherein AI systems evade each of these preemptive measures. Check out my YouTube video for some examples.
Stories about the Control Problem are well-trodden territory. And it’s for this reason that I find Herbert’s Means of Control Problem to be so profound. Herbert isn’t warning us about the dangers of AI in and of itself. Instead, he’s warning us about the dangers of human nature. He’s warning us against our desire to offload our decision-making and our reasoning processes. He’s warning us against the desire to control and manipulate other human beings. These temptations have always been with us, but now, with powerful tools like thinking machines, acting on those temptations at a massive scale has never been more possible and more alluring. Don’t do it!
Perhaps we should be a bit warier of LLMs and the companies rolling them out at breakneck speed. Perhaps we should be more hands-on in our own decision-making and rely less on computers. We don’t have to be complete Luddites, but maybe we should take a more minimalist approach to these emerging technologies rather than racing our neighbors to be the first with an AI chip in our skulls. If we do, maybe, just maybe, we can leave the Butlerian Jihad in the Dune universe. Something to think about…