I’m new to philosophy, especially the analytical side of things. I also really enjoy the content you put out both here and on YT, but I have a question about your second premise. If I understand it correctly: if you believe you’re in a simulation, then you can’t trust your knowledge-forming process.
I don’t see why it follows that sims in a simulated reality can’t trust their knowledge-forming process with respect to the reality that they’re in. If the simulation is programmed and executed with a set of rules, like any physics simulation is, and the sims make observations that eventually reveal the rules of their reality, then it seems to me they have plenty of reason to trust that process.
I don’t see why the sims need to understand the base reality over their own to trust their knowledge forming process. It just means their knowledge forming process is bounded to claims about the rules of the simulated reality.
Thanks for this comment! Did you read the reasons I give for motivating premise 2? Seems to me that the design plan and motivations of the programmer(s) would be inscrutable to those in the simulation, and if inscrutable, then they don't know if they can or should trust them to reliably produce true beliefs, whether base-reality (BR) truths or simulated-world (SW) truths. Furthermore, if their knowledge-forming processes are bounded to SW truths, then they can't know BR truths, like the fact that they live in a computer simulation. So the belief < I am living in a computer simulation > would be self-defeating for a sim even if it's true.
But I'm probably wrong on these tbh haha
Thank you for the quick reply!
I did, but after reading your reply I had to go back and reread them. I definitely didn’t pick up on some things on my first read-through. I can see how the intentions of the programmer could be to obscure or even misdirect the sims into finding a rule set that isn’t the actual rule set governing the program… I also just had to go read some more examples of self-defeating arguments lol.
I think the dissonance I’m getting comes from thinking of truth claims in a pragmatic way, where the beliefs of the sims have repeated utility and *appear* to be true, but aren’t true in the sense of being identical to the rules of the program.
I’m sure I’m wrong too, but I feel like I’ve learned something. Thanks again!
-JMM
My pleasure, man! I love/hate thinking about this stuff 😄 I'll just say that lots of people respond with similar lines of thought to yours, so you're in good company; I just think you guys are wrong 😉
That’s good to know! Haha, well it wouldn’t be the first time, and it definitely wouldn’t be the last! 😁
Okay, to be fair, what doesn't get philosophy departments defunded at this point? If it isn't theory of mind, philosophy of science, logic or game theory, it doesn't seem to stand a chance anymore.
I actually think if profs did rigorous work on this, they'd capture the imagination of the STEM folks and put philosophy back into popular conversations. An interdisciplinary undergrad class on SH, with, like, computer science, religious studies, cultural studies, and philosophy, would be awesome
I loved this article and would like to publish a response tomorrow.
Does your transcendental argument still go through even if one rejects Proper Functionalism? (I affirm proper functionalism, btw)
Also wouldn't an internalist, especially a strong internalist like Tim McGrew, dispute premise 1, about reliability? Or would he affirm that one's cognitive processes need to be reliable?
Great questions! So I'm not a proper functionalist about base reality, but I think an SH proponent ought to be about sims in a simulated world if they think sims are created by a simulator. They've been designed to reason in a particular cognitive environment for a particular reason. Unless they think the sims are like A-life and are the product of evolutionary algorithms, I don't think that's the usual story. But the inscrutable intentions of the simulator are a stronger point, I think, so even if proper functionalism is no good, the transcendental arg might still go through. Additionally, there could be other reasons one affirms premise two that I haven't broached in the piece.
Not sure what Tim would say. I'm good friends with one of Tim's best students, and he usually says something like "what if I have a priori justification for the reliability of my cognitive faculties? Then I wouldn't be concerned with these self-defeat-style global skeptical threats." And I might say, yes, but on SH you wouldn't have a priori justification, or coming to believe SH causes you to doubt your a priori justification, and thus you ought not affirm SH. But idk. Those dudes are stupid smart. I think they'd still have room for reliability, but they want some awareness and not just an externalist "we're in the good case" hand wave 😅
Those are good responses! Very helpful! I'll have to think more about your argument. I myself enjoy self-defeats and reductios, but they are hard to pull off for sure!
I'm going to hand this out to my Intro to Philosophy students when we do our unit on Descartes / The Matrix. I think they'll appreciate both the humor and clarity of your prose. (This is Jordan, btw.)
Haha those poor students. But thank you bro, I always love your comments!!
"So if SH gives us a defeater for our empirical beliefs, then our scientific knowledge is defeated and drags our belief in SH down with it. And thus, SH is self-defeating. If you believe SH, then you can’t be justified in believing SH. So don’t believe SH."
This feels very similar to Hume vs Kant regarding the Problem of Induction. So the SH believer, similar to those inclined towards scientism, must take a leap of faith in order to maintain a fundamental view of reality.
Something I noticed is that our reasoning must be valid in both simulated and base reality for the argument to go through.
PART 1
1. The conclusion of the simulation trilemma is that we probably live in a simulation run on a base reality.
2. The conclusion we come to rests upon reasoning we assume is true.
3. If our reasoning isn't also true of base reality, then the conclusions we've reasoned to about base reality (that it is running our simulation) are false.
C1. The conclusion we drew in (1) is based on false reasoning, and we do not live in a simulation, OR:
5. The same rules of reasoning apply to both base and simulated reality.
But, if the simulation trilemma can reach the same conclusion in base reality as in a simulation, then it self-contradicts.
PART 2
5. The rules of reasoning in this reality are the same as in base reality.
6. Since the rules of reasoning must be the same, the same conclusion as (1) can be arrived at in base reality.
7. That conclusion arrived at in base reality is false.
8. The argument can reach untrue conclusions.
C2. We do not live in a simulation.
You seem to be arguing that Chalmers' qua distinction fails because we don't know that scientific/technological advancement in the (supposed) base reality is increasing at the rate of our (supposedly) simulated reality.
But wouldn't the (supposed) base reality inherently be one of extreme scientific/technological advancement, given that it has (supposedly) generated sims?
Sorry if I've missed something obvious.
Nah, that's not quite what I'm arguing. I'm saying, along with James Anderson, that our motivation for believing we live in a simulation is undercut once we come to believe that we live in a simulation, because now we come to know our scientific/technological history is non-veridical (or at minimum we have a reason to withhold judgement as to its veridicality). But it could be further argued that we don't know anything about base reality. How do we know if simulating worlds is difficult or easy in base reality if our own scientific/technological history does not correspond to base reality? And how are we to know whether it does or doesn't correspond to base-reality science/tech?
I see. I think the key thing I missed was the thought that simulating worlds might be easier in a hypothetical base reality than it is for us.