How to Spot an Android with René Descartes
Cartesian Limitations for Androids and Artificial Intelligence
When did the concept of ‘artificial intelligence’ begin? Who was the first to think of it? The actual term was coined by John McCarthy for the 1956 Dartmouth workshop, but Plato stans[1] will no doubt tell you the concept can be found somewhere back in his works.
But I’ve traced the concept at least as far back as 1637, to René Descartes’s A Discourse on the Method of Correctly Conducting One’s Reason and Seeking Truth in the Sciences, a title that reads as unbearably long today but was par for the course in those days. The Discourse on the Method was Descartes’s first publication, and it was a stand-in for a book that would never come, The Universe. Descartes’s Parisian friends kept pressuring him to make his philosophical method and ideas public, but Descartes was reluctant, especially given his adopted motto, “he who remains well hidden, lives well”.[2] Not wanting to publish a full treatise on God, knowledge, natural philosophy, human nature, the world, philosophical method, and, well, everything there is, Descartes abandoned The Universe project and instead placated his Parisian friends by giving them the Discourse. The short book is an introduction to and brief overview of his unique philosophical and scientific program, his philosophical ‘method’. It’s a discourse, not a full-blown treatise on everything.
Animals and Automata
In part five of the Discourse, right after he gives his famous dictum, cogito ergo sum (I am thinking, therefore I exist), Descartes discusses the nature of the human soul and its creator, God. In this section, Descartes also considers animals in order to distinguish their natures from that of human persons, and it’s here that he takes a brief excursus to touch on robots and artificial intelligence. This may seem like an odd excursus, but Descartes believed that animals have “no mental powers whatsoever” and that they are more akin to mechanical clocks, which are wound up and wholly subject to the laws of physics rather than the laws of reason.[3] Thus, Descartes claims that non-rational animals, those without a rational substantial soul (res cogitans), are actually automata themselves.
Descartes begins his discussion of artificial intelligence by noting that skilled engineers could build automata and “moving machines” which could accomplish the same tasks as a human worker but with far fewer moving parts. He notes that these same engineers could choose to make a moving machine which wholly resembles an animal, and since animals and their moving-machine replicas are both non-rational automata, “we would have no means of knowing that they were not of exactly the same nature”.[4] Presumably, Descartes means by outward appearance alone. This isn’t all too implausible; it’s actually a major theme in Philip K. Dick’s sci-fi novel Do Androids Dream of Electric Sheep?, which was adapted into Ridley Scott’s 1982 movie Blade Runner.
Androids and Artificial Intelligence
But Descartes is concerned with the human mind, or soul, as res cogitans, as rational. So, arguendo, an animal and its replica moving machine may be indistinguishable, but what if someone were to engineer an automaton that looks just like a human being? Let’s call such an automaton an ‘android’. How could we know the difference between a genuine human and an android? Descartes provides us with two means of distinguishing between genuine humans and androids. I’ll break them down for us, but first let me present the full quotation, because it’s fun to read Descartes in his own words:
“At this point I had dwelt on this issue to show that if there were such machines having the organs and outward shape of a monkey or any other irrational animal, we would have no means of knowing that they were not of exactly the same nature as these animals, whereas, if any such machines resembled us in body and imitated our actions insofar as this was practically possible, we should still have two very certain means of recognizing that they were not, for all that, real human beings. The first is that they would never be able to use words or other signs by composing them as we do to declare our thoughts to others. For we can well conceive of a machine made in such a way that it emits words, and even utters them about bodily actions which bring about some corresponding change in its organs (if, for example, we touch it somewhere else, it will cry out that we are hurting it, and so on); but it is not conceivable that it should put these words in different orders to correspond to the meaning of things said in its presence, as even the most dull-witted of men can do. And the second means is that, although such machines might do many things as well or even better than any of us, they would inevitably fail to do some others, by which we would discover that they did not act consciously, but only because their organs were disposed in a certain way. For, whereas reason is a universal instrument which can operate in all sorts of situations, their organs have to have a particular disposition for each particular action, from which it follows that it is practically impossible for there to be enough different organs in a machine to cause it to act in all of life’s occurrences in the same way that our reason causes us to act.”[5]
Spotting Androids by Their Limitations
So Descartes thought that two limitations on androids and AI can help us distinguish an android from a human. I’ll formulate his limitations as follows:
(1) It’s not conceivable that an android can pass a proto-Turing test.
(2) Even if an android could initially pass a proto-Turing test, we’d discover that it’s an android soon enough due to the fact that the android is not generally intelligent.
Let me explain (1) really quick. A ‘Turing test’ is a test given to a computer which is meant to help humans determine whether that computer can really “think”. It’s named after Alan Turing, who proposed the test in his 1950 paper “Computing Machinery and Intelligence” in the journal Mind.[6] According to AI philosopher Margaret Boden, Turing meant the test to be more tongue-in-cheek than a serious test for determining intelligence, let alone ‘sentience’ or phenomenal consciousness (qualitative experience; what-it’s-like-to-be-ness).[7] The test, according to Boden, “asks whether someone could distinguish, 30 percent of the time, whether they were interacting (for up to five minutes) with a computer or a person. If not, [Turing] implied, there’d be no reason to deny that a computer could really think.”[8] It’s usually suggested that the person conducting the test ought to be a psychologist or someone skilled at talking with people. Many will point to the prominence of behaviorism in the philosophy of mind and psychology at the time as proof that Turing was serious about his imitation-game test for computer intelligence, but whether Turing meant this as a legitimate test is a subject for another time. I only bring up the Turing test in order to flesh out Descartes’s first means of spotting an android, his own proto-Turing test.
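Boden’s summary of the criterion can be made concrete with a toy scorer. This is just a sketch of her reading of the pass condition (the 30 percent threshold is from her quotation above; the function name is my own):

```python
# Toy scorer for Turing's criterion as Boden summarizes it:
# the machine "passes" if the judges fail to identify it as
# the machine at least 30 percent of the time.

def passes_turing_test(judge_verdicts: list[bool]) -> bool:
    """judge_verdicts[i] is True iff judge i correctly spotted the machine."""
    fooled = sum(1 for correct in judge_verdicts if not correct)
    return fooled / len(judge_verdicts) >= 0.30
```

With ten judges, four of whom were fooled, the machine passes under this reading; with only two fooled, it fails.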
Descartes says that it’s conceivable that someone could engineer an android with preprogrammed sentences corresponding to its various movements, but he argues that it’s not conceivable that an android could have a meaningful conversation with you. An android, according to Descartes’s argument, could not, for instance, hear your question about a particular chair, form a wholly unique sentence not previously programmed into it or drawn from its training data, and proceed to respond to you in real time as a human being would. Thus, an android would fail Descartes’s embodied proto-Turing test. I use the term “embodied” because the Turing test is usually pitched with the machine in question hidden behind a screen or in another room, talking to the human tester via a digital interface rather than through a face-to-‘face’ dialogue. Now, the uncanny valley (the eerie sensation one gets when seeing a CGI face or a robot face that’s not quite real enough yet) is a very difficult obstacle to overcome and would be a dead giveaway for telling android from human today. But Descartes is envisioning a Philip K. Dickian future where we have androids that are genuinely physically indistinguishable from humans to the naked eye. Descartes claims that even in this type of scenario, it’s not conceivable that the android would be able to genuinely converse in real time, and thus we’d be able to distinguish real from robot.
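The preprogrammed automaton Descartes imagines is easy to sketch. The trigger phrases below are hypothetical stand-ins for his “touch it and it cries out” example:

```python
# A minimal sketch of the automaton Descartes describes: fixed
# stimulus-response pairs, with no capacity to compose a novel reply.

CANNED_REPLIES = {
    "touch": "Ouch! You are hurting me!",
    "hello": "Good day to you.",
}

def automaton_reply(stimulus: str) -> str:
    for trigger, reply in CANNED_REPLIES.items():
        if trigger in stimulus.lower():
            return reply
    return "..."  # novel input exposes the machine: it has no general response
```

Ask it about a particular chair and it has nothing to say, which is exactly Descartes’s point: the replies are indexed to stimuli, not composed to “correspond to the meaning of things said in its presence”.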
Now, this is not a good argument. Still, Descartes was a brilliant res cogitans; he saw way down the line into the world of robots, androids, and AI, so we have to give him credit. The man was a genius. He was just wrong about his embodied proto-Turing test. Descartes thought that the robotics side of things was much easier to achieve than the chatbot side. As it turns out, he had it exactly backwards. We probably already have chatbots which can pass the Turing test, and if not, we will soon. However, we do not seem to be nearly as close to creating an android which can pass as a human to the naked eye, that is, one that can traverse the uncanny valley. But if, probably more like when, AI engineers and roboticists build an android which can traverse the uncanny valley, it’s very conceivable that androids could fool us. Imagine an android with a ChatGPT model running on it, say something like a GPT-7. Mind you, we’re only on GPT-4 and people are already freaking out, wondering if it’s ‘sentient’ (whatever that means). Imagine what a GPT-7 would be like! For sure that kind of android is fooling Descartes.
So, Descartes’s first means of distinguishing androids from humans is a bust, but we can’t fault him for not anticipating the transformer neural network revolution prompted by Google’s 2017 paper “Attention Is All You Need”,[9] nor the uncanny valley produced by robots like Sophia.
The Hard Problem of Artificial General Intelligence
Perhaps Descartes’s second means of spotting androids will fare better, though. Recall the second limitation:
(2) Even if an android could initially pass a proto-Turing test, we’d discover that it’s an android soon enough due to the fact that the android is not generally intelligent.
Descartes motivates this second limitation of androids by making a distinction that is commonly made amongst AI theorists and philosophers today, that is, the distinction between narrow artificial intelligence and artificial general intelligence. A narrow AI is an AI system with a narrow scope of applicability, think ‘chessbots’. A chessbot, such as the open-source chess engine Stockfish, is a program capable of defeating essentially any human chess player in the world, world champions included. But Stockfish is a narrow AI in that it can’t do your taxes for you, can’t operate a Tesla Cybertruck, can’t write you an essay, and can’t even play you in similar board games like Go or Othello. It has a very narrow use case: games of chess.
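“Narrow” can be made concrete. Here’s a complete game-tree search, the textbook technique behind early chessbots, scaled down to tic-tac-toe. (Stockfish’s actual search is vastly more sophisticated; this is just an illustrative sketch.) It plays this one game optimally and is useless for literally anything else:

```python
# Minimax search for tic-tac-toe: a textbook narrow AI.
# Boards are 9-character strings; "X" maximizes, "O" minimizes.

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, best_move): +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(b)
    if w == "X": return (1, None)
    if w == "O": return (-1, None)
    if " " not in b: return (0, None)
    moves = []
    for i, cell in enumerate(b):
        if cell == " ":
            nb = b[:i] + player + b[i+1:]
            score, _ = minimax(nb, "O" if player == "X" else "X")
            moves.append((score, i))
    return max(moves) if player == "X" else min(moves)
```

From the position `"XX OO    "` with X to move, the search finds the immediate win; from the empty board it correctly reports that perfect play is a draw. That’s its entire world.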
An artificial general intelligence (AGI), on the other hand, is an AI which can successfully operate across various domains. Typically, AGI is said to be achieved when an AI can generalize across the same domains that the average human being can. AGI is kind of hard to define and it will depend on who you ask, but generally speaking, an AGI is meant to generalize beyond its training data and take a leap into the unknown the way humans do when we discover new paradigms or invent new ways of thinking. Weaver Weinbaum calls this “open-ended” intelligence: intelligence capable of self-organization but also self-transcendence. The intelligent system can transcend its current parameters, and, once it does so, re-organize itself and repeat the process again and again. According to AGI theorist Ben Goertzel, the guy who popularized the term ‘AGI’, this open-ended intelligence picks out the core theme of general intelligence that the AI folks are searching for (check out my podcast episode with Ben).
This open-endedness or take-all-challengers approach to intelligence also seems to be what Descartes has in mind in his second limitation. He acknowledges that there could be narrow-AI-esque androids with limited use cases, wherein “such machines might do many things as well or even better than any of us”. But he argues that creating an AGI android, one which could operate with a human level of sophistication in all the sorts of situations human life presents to us, would be practically impossible. Descartes thinks this because he has something like ‘expert systems’ in mind, where the AI’s ‘knowledge’ is hand-crafted in minute detail by an expert in the relevant area. If AI engineers were limited to this flavor of AI, then perhaps AGI would be practically impossible after all, considering all there is to know and how long it would take to program it in.
But with the dawn of deep neural networks that teach themselves from massive data sets, like huge swaths of the internet, an AI can learn in a “self-supervised” manner: the engineers and programmers can let the program churn through unlabeled data at breakneck speed, with the data itself supplying the learning signal (a language model, for instance, is rewarded for correctly predicting the next word). With further reward systems layered on top, the AI’s good behavior, like correctly labeling a picture of a dog as ‘dog’, is reinforced Pavlov-style. Descartes had absolutely zero concept of this type of AI. An android that can teach itself? An android whose AI has ‘read’ every book that’s ever been published on the internet, every blog post, every subreddit and tweet and manic FB post that’s publicly accessible? It’s insane. But even the modern deep-neural-net/transformer models of AI don’t quite count as the self-transcendent, self-organizing AI which Goertzel and many others would credit as AGI. So, Descartes wins? AI can’t generalize, and thus we’d eventually discover the AI android isn’t a human being when it slips up in some particular domain of knowledge?
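The core idea, that unlabeled data supplies its own training signal, fits in a few lines. A bigram next-word model is about the simplest possible instance of the self-supervised trick behind large language models (the toy corpus is my own):

```python
from collections import Counter, defaultdict

# Self-supervised learning in miniature: the raw text provides its
# own "labels" (each word is the prediction target for the word
# before it), so no human annotation is needed.

def train_bigram(corpus: str) -> dict:
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    return counts[word.lower()].most_common(1)[0][0]

model = train_bigram("i think therefore i am and i think therefore i exist")
```

After “training”, the model predicts “think” after “i” and “i” after “therefore”. A GPT-style model does conceptually the same thing with a transformer over hundreds of billions of words, which is why the scale, not the basic recipe, is what Descartes could never have imagined.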
Not quite. First off, it’s not clear that AGI is practically impossible just because it hasn’t been accomplished yet; we’d need something stronger than ‘practical’ impossibility to rule it out. And note that I’m not talking about machine consciousness here: an AI’s achieving general intelligence says nothing about consciousness at all. Machine consciousness may turn out to be metaphysically impossible, but that doesn’t seem to be the case with AGI. So just because it was impractical for a Descartes-era engineer doesn’t mean it won’t be eminently practical for the AI engineer of the late 21st or 22nd century. Secondly, perhaps full-blown AGI isn’t actually needed to fool the android detector. OpenAI’s GPT-4 is not generally intelligent, but it has had such an insanely massive training set that it can ‘discuss’ just about anything you ask it, and in lots of different intonations. Usually what gives it away as an AI is its confabulation of sources and events, but human beings are bullshitters too sometimes, so that’s not all too conclusive either. All this to say, I’m not convinced by Descartes’s second limitation either. I think if roboticists can solve the uncanny valley problem, then the AI engineers can definitely create functional mindware sufficient for fooling the naked-eye observer.
All in all, Descartes was a freakin’ genius. He was wrong that androids couldn’t fool us, and probably wrong about AGI being practically impossible. If AGI is impossible, it’s probably impossible for a reason other than any practical limitation. But what an absolute legend he is for thinking about this stuff way back in the early-to-mid 1600s! All three pictures in this post were generated by DALL·E 2, by the way, just a little cheeky poke at old Descartes.
If you guys like this kind of content, make sure to sub to my stack. If you want to learn more about AI, I recommend these two books (these are affiliate links so buying them here supports my work):
Artificial Intelligence: A Very Short Introduction by Margaret Boden
Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
[1] Urban Dictionary: a crazed and/or obsessed fan. The term comes from the song “Stan” by Eminem and describes a fan who goes to great lengths to obsess over a celebrity.
[2] The adage of Ovid: bene qui latuit, bene vixit. Cited in the introduction to the Discourse by Descartes scholar Ian Maclean, Oxford World’s Classics edition, xvi.
[3] Discourse, 48.
[4] Discourse, 46.
[5] Discourse, 46-47.
[6] Alan Turing, “Computing Machinery and Intelligence”, Mind (1950): https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
[7] Margaret Boden, Artificial Intelligence: A Very Short Introduction, 107 – grab the book here to support my work: https://amzn.to/3tkPZoT
[8] Ibid.
[9] Ashish Vaswani et al., “Attention Is All You Need” (2017): https://arxiv.org/abs/1706.03762