I think the Simulation Hypothesis is self-defeating. I’ve held this contention for a long time now, and lots of my philosopher friends have told me it’s a waste of time to chase down this or any other argument against the Simulation Hypothesis—indeed, some have told me that it’s these kinds of questions that get philosophy programs defunded. Sorry if this post contributes to that, but I just can’t help myself; blame it on my programming.
I’ve also had philosopher friends ask me to flesh out my thoughts and arguments against it only to show me why I’m wrong, which is helpful and awesome and I love you folks. I’ve had other philosopher friends tell me “you’re spot on, Park, good work, you’re handsome and you’re the best of us” but maybe not in so many words… But all that to say, I’ve been thinking about this for a while and have some thoughts that my readers might enjoy. I don’t think this post will be totally stupid, but it might be—even still, you’ll learn something about the Simulation Hypothesis, so there’s that. Here are my latest thoughts on the Simulation Hypothesis (‘SH’ from here on out because if I have to type that out a thousand times I’m going to puke).
As it turns out there isn’t just one SH, there are lots of different variants depending on who you ask. There are many questions that an SH theorist will need to answer and they don’t all answer them in the same way. Here’s a quick smattering of the kinds of questions I have in mind:
Are we wholly simulated beings who live in a computer?
Do we have a body in base reality that’s being fed a virtual reality through a brain-computer interface?
Is there any base reality at all?
What counts as a ‘simulation’?
Is a simulated world an illusion?
What’s the purpose of the simulation we find ourselves in?
Did the simulators make us because they’re interested in counterfactual history, or for some completely different purpose?
Is the programmer our god, the God, a human being, an alien intelligence?
Could we ever get evidence that we live in a computer simulation?
I could go on, but you get it. There are lots of questions to be answered, and disparate answers give rise to disparate simulation hypotheses. Though there are many variants of SH, at its core, an SH will say something like this:
Simulation Hypothesis (SH) = the world we live in is not the most fundamental reality, but is a simulated reality running on an advanced computer in a reality at least one level more fundamental than ours.
The SH idea has been in science fiction for a while and it has had close cousins throughout the history of philosophy, including the brain-in-a-vat skeptical scenario (an early version of which appears in Gilbert Harman’s 1973 book Thought), Descartes’s Dream Argument, Plato’s Cave analogy, Zhuangzi’s Butterfly Dream, and more. But today, the father of the modern SH is Nick Bostrom.
Bostrom’s Simulation Argument
In his 2003 paper, “Are You Living in a Computer Simulation?”, Bostrom gives a trilemma argument (backed by some Bayesian reasoning) that he calls “the Simulation Argument.”
Bostrom’s trilemma goes like this:
One of these three propositions is true:
1. We will never be able to create simulated realities with sims in them.
2. We may be able to create simulated realities with sims but we will refrain from making them, perhaps for moral reasons.
3. We are already living in a simulated reality.
If you don’t have a good reason for affirming 1 or 2, then you’re stuck with 3. Why think that? Well, Bostrom asks the reader to consider the rapid pace of scientific and technological progress and project it out into the future. With the assumption that consciousness can be realized in computers, it’s not too crazy to assume that we’ll be able to produce entire virtual world simulations with their own virtual, conscious beings—call them ‘sims’. How long until we have that kind of tech? 10 years? 50 years? 1,000 years? It doesn’t matter: if it’s possible, then at some point it’s likely that the majority of conscious beings like ourselves will actually be sims instead of base reality beings. Think about it: there’s one base reality, but there could be billions or trillions of simulated worlds filled with conscious beings. There could even be nested simulations within simulations that go down many levels. So, if most conscious beings that exist are sims, then you’re probably a sim.
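The counting step at the heart of that argument is really just arithmetic. Here’s a toy sketch in Python—with numbers that are pure illustrative assumptions, not Bostrom’s own estimates—showing how even a modest number of simulations makes simulated observers swamp base-reality ones:

```python
# Toy version of Bostrom's observer-counting argument.
# Every number below is a made-up assumption for illustration only.

base_reality_observers = 10**10        # conscious beings in base reality (assumption)
simulations_run = 10**6                # simulated worlds ever run (assumption)
observers_per_simulation = 10**10      # conscious sims per simulated world (assumption)

sim_observers = simulations_run * observers_per_simulation
total_observers = base_reality_observers + sim_observers

# If you reason as a "typical" observer with no special information about
# which kind you are, your credence that you're a sim is just the fraction:
p_sim = sim_observers / total_observers
print(f"P(you are a sim) = {p_sim}")
```

On these made-up numbers, simulated observers outnumber base-reality observers a million to one, so the “typical observer” credence that you’re a sim is vanishingly close to 1. The philosophical work is all in whether that indifference-style reasoning is legitimate, which is exactly where the self-defeat worries below bite.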
James Anderson’s Self-Defeat Argument
I originally started looking for a self-defeat style argument against SH after reading up on transcendental arguments à la James Anderson (namely his analysis of Cornelius Van Til’s transcendental reasoning about God) and William Hasker (especially his 1973 paper “A Transcendental Refutation of Determinism”), folks working on C.S. Lewis’s Argument from Reason against forms of physicalism in the philosophy of mind, and Alvin Plantinga’s Evolutionary Argument Against Naturalism in epistemology. I eventually found exactly what I was looking for in a couple of James Anderson’s blog posts, which you can and should read here: https://www.proginosko.com/tag/simulation-hypothesis/
Anderson argues something like this: SH is predicated, at least in part, on our scientific knowledge of computers, simulations, and how brains work. But if the simulation hypothesis were true, then we’d have a defeater for all of our empirical beliefs, which are largely the basis for our scientific knowledge. We’d acquire a defeater for our empirical beliefs on SH since everything we come to believe on the basis of our senses would not be veridical but instead would be a kind of digital illusion. So if SH gives us a defeater for our empirical beliefs, then our scientific knowledge is defeated and drags our belief in SH down with it. And thus, SH is self-defeating. If you believe SH, then you can’t be justified in believing SH. So don’t believe SH.
Chalmers vs. Anderson
However, in his paper, “The Matrix as Metaphysics” and probably more recently in his book Reality+ (I say probably because I don’t want to go read through it again or I’ll end up in a research black hole and never finish this post (I know because I’ve done that many times (including on previous iterations of this very post))), David Chalmers has argued that something like Anderson’s self-defeat argument is too quick. Why think that a simulated being, a sim, doesn’t have veridical empirical beliefs? Chalmers gives a kind of ‘qua’ move reminiscent of medieval philosophers performing partitive exegesis when considering Scripture and the nature(s) of Jesus Christ, e.g. qua (according to) Christ’s humanity, he was able to grow, eat, sleep, be ignorant of certain facts (like his return to earth), but qua his divinity he was perfect, incapable of change, omniscient, etc.
Chalmers’s SH qua move goes something like this: a sim’s empirical beliefs are largely true qua their simulated world, even if their empirical beliefs are not largely true qua the base reality outside of their simulated world. If this qua move works, then it looks like Anderson’s self-defeat argument above doesn’t go through.
Recall that Anderson argues that on SH all of our empirical beliefs and all (or most) of our scientific beliefs are non-veridical—they do not correspond to reality—but those kinds of beliefs served as the motivation for SH, so SH defeats itself; it cuts off the branch it was sitting on. But Chalmers comes through with his SH qua move and says “not so fast, just because a sim’s empirical beliefs are not veridical qua base reality does not mean they aren’t veridical qua the sim’s own simulated reality.” If that’s right, then the sim’s beliefs aren’t false simpliciter, just false concerning base reality. So is SH not self-defeating after all?
Well, first off, I’m not so sure that Chalmers’s qua move is legitimate. Consider the movie The Truman Show (which I think is probably a rip-off of Philip K. Dick’s Time Out of Joint). Truman thinks he’s living a normal life with his wife in a small town, but in reality, he lives in a massive studio and he’s the star of a reality TV show that’s been filming him 24/7 since his birth (maybe even in utero, I don’t remember). Does Chalmers’s qua move work for Truman? Qua the world portrayed to Truman, call it the TV world, Truman is a regular citizen: he knows where he works, he knows his wife’s name, he knows who his best friend is, etc. But qua the real world, all of that is false and he is being systematically deceived. Does it really make sense to say Truman has knowledge qua the TV world? I don’t think so. So, what’s the significant difference between Truman and a sim who lives wholly in a virtual simulated world?
Now, I’m probably missing some nuance of Chalmers’s SH qua move, but it seems to me that even if we grant its legitimacy, the move doesn’t evade Anderson’s argument. Bostrom motivated his simulation argument by asking us to consider the rapid scientific and technological advancement in our history and then project it out into the future. If you don’t have a reason to affirm one of the other two propositions, then you’re left with the proposition that we are already living in a computer simulation. But if we’re living in a computer simulation, then why should we think that the scientific history we reasoned about is true in base reality as well as in the simulated reality that we inhabit? The qua move says that our knowledge of scientific history counts as real knowledge qua the simulation, but we need to know base reality scientific history in order to motivate the belief that we live in a computer simulation, so it doesn’t matter if we have simulation knowledge if what we need is base reality knowledge.
Sure, in our history it looks like scientific and technological progress is increasing at something like an exponential rate, but if I come to believe that I live in a computer simulation, then why should I think that this progress is likewise true of the reality that my simulator lives in? Maybe our technological progress, qua simulation, has no connection whatsoever to the progress that took place in base reality. It seems illegitimate to use our history of progress as the means of motivating SH since once we come to affirm SH we no longer think that our history corresponds to the history of base reality, the reality where our simulators are said to have created our simulation.
So it looks like Chalmers’s qua move doesn’t get us past Anderson’s self-defeat argument. We don’t need knowledge qua simulation to motivate Bostrom’s simulation argument; we need knowledge qua base reality, and if we do live in a simulation then that’s precisely what we do not have. So again it looks like SH motivated by Bostrom’s Simulation Argument is self-defeating: once you come to affirm it, you’re no longer justified in affirming it.
But while Anderson’s argument specifically targets Bostrom’s motivation for SH, I’ve been on the hunt for something more comprehensive, an argument that targets all forms of SH. This is probably a foolhardy endeavor but maybe I can get some good feedback from you all and finally give it up.
Parker’s Proposed Transcendental Argument Against Simulation Hypotheses
Here’s a transcendental argument against SH that I’ve been thinking about:
1. A necessary condition of being rational is that one (at least implicitly (or upon reflection?)) trusts their knowledge forming processes to reliably lead them to true beliefs.
2. If you come to believe that you live in a computer simulation, then you have a reason not to trust that your knowledge forming processes reliably lead you to true beliefs.
Therefore,
Conclusion: it is not rational to believe you’re living in a computer simulation (even if you are).
I think (1) is pretty straightforward, but (2) may raise some questions. So why think that coming to believe SH gives you a reason not to trust your knowledge forming processes to reliably lead you to true beliefs? Let me just spam two half-baked reasons:
Chalmers’s Simulation Hypothesis Qua Move Weaponized
According to Chalmers’s qua move, there are two truths at play when considering SH: the truth qua base reality (BR) and the truth qua the simulated world (SW). Qua SW, I am a flesh-and-blood human person, but qua BR I am a simulation of a person—maybe I’m just a background NPC, or maybe I’m a simulation of a BR person named Parker Settecase. Whatever the case, in affirming SH I affirm that while I appear to be a flesh-and-blood person, more fundamentally I am a set of pixels on a screen or information bits on a hard drive or something like that. I am a simulation of a person, and not a BR person as I thought. I see simulations of trees, squirrels, dogs, cars, etc., which present as everyday objects but are more fundamentally bits presented to me as if they were BR items. In affirming SH, I come to realize that everything I took to be a part of the world is actually a part of a simulation of the world that itself exists in BR. What I took to be BR is actually SW, and I was systematically mistaken about my world and my place in it. It seems to me that this one BR belief wrecks my SW beliefs: I am now Truman after finding out he’s on a reality TV show. And that seems like a reason to doubt that my knowledge forming processes reliably lead to true beliefs, since most of my beliefs have been wrong qua BR.
Improper Function and Inscrutable Intentions
A second and related reason one might come to affirm premise 2 of my proposed transcendental arg. against SH is what I call Improper Function. Proper Functionalism (closely associated with Reformed Epistemology) is a view in epistemology championed by Alvin Plantinga (and others) wherein a thinker has warrant for their beliefs when their cognitive faculties are functioning properly, in the cognitive environment that said faculties were designed to function in, and when the design plan of the faculties is a good one, i.e., when the faculties are aimed at truth and reliably produce true beliefs in the thinker.
Now if you come to affirm SH, then it seems like you should also affirm something like Proper Functionalism. Your cognitive faculties, your knowledge forming processes, have been designed by a computer simulator to function in the simulated world you inhabit. If that’s the case, why think you can reason about the truths of base reality, a cognitive environment that you haven’t been designed for? Maybe you think “sure, I’m reasoning about a different cognitive environment than the one I was designed for, but I’m still reasoning in the appropriate environment, so it doesn’t matter.” And that’s fair enough, but why think that your simulator has designed your knowledge forming processes to reliably lead you to true beliefs? This seems to come down to what you take to be the intentions of the simulators.
What’s the purpose of the simulation that we live in? Perhaps we are all NPCs in a dating app designed to test the compatibility of two potential lovers, like the Black Mirror episode “Hang the DJ” (s4 e4). Perhaps the simulators intend our simulation to be a counterfactual history of their own base reality; then presumably they would design you to form beliefs consistent with those of base reality persons in the year 2024 rather than give you the carte blanche knowledge forming processes that base reality thinkers have. There are lots of different scenarios we could paint, but it seems to me that the intentions of the simulators are important for determining whether or not we can trust our knowledge forming processes to reliably lead us to truth, and the intentions of the simulators are inscrutable to us. So while that may not be a positive reason to distrust our knowledge forming processes, it does seem to be a reason not to trust them, or to withhold trust in them, or something like that.
So that’s that for now. I’m far less optimistic about my argument than I used to be but this latest iteration is stronger than the last few. I’m also more confident in James Anderson’s arg. after putting it in touch with Chalmers’s qua move.
I’m new to philosophy, especially the analytical side of things. I also really enjoy the content you put out both here and YT, but I have a question about your second premise. If I understand it correctly: if you believe you’re in a simulation then you can’t trust your knowledge forming process.
I don’t see why it follows that sims in a simulated reality can’t trust their knowledge forming process with respect to the reality that they’re in. If the simulation is programmed and executed with a set of rules, like any physics simulation is, and the sims make observations that eventually reveal the rules of their reality, then it seems to me they have plenty of reason to trust that process.
I don’t see why the sims need to understand the base reality over their own to trust their knowledge forming process. It just means their knowledge forming process is bounded to claims about the rules of the simulated reality.
Okay, to be fair, what doesn't get philosophy departments defunded at this point? If it isn't theory of mind, philosophy of science, logic or game theory, it doesn't seem to stand a chance anymore.