
Shelly challenged me with this one a while ago, and I was just reminded of it by a conversation in IM.

Let's say that you suspect that you are living inside the Matrix--that is, the reality into which you were born and in which you and everyone else lives is a simulation.

Would it be possible, using only the tools and observations you have available within that simulated reality and without any referent to anything outside the simulation, to demonstrate conclusively that you were living in a simulation? And by the same token, if it turned out that you were not living inside the Matrix, would it be possible to demonstrate that you were not? If so, how?

Got me stumped. Ideas?


Comments

( 31 comments — Leave a comment )
indywind
Feb. 24th, 2006 08:43 pm (UTC)
Sorry, can't help with that one.
That's one of the few ideas that'll give me nightmares if I think too hard on it.
dilletante
Feb. 24th, 2006 08:49 pm (UTC)
this was descartes's thought experiment, right? suppose you're just a brain in a vat, fooled by an evil genius into thinking you're perceiving things...

i've never been able to get any further than he did. ;)
ktar
Feb. 24th, 2006 08:50 pm (UTC)
Bwa ha ha, same post at the same time. =D
ktar
Feb. 24th, 2006 08:49 pm (UTC)
No. Descartes says no, and I have been unable to find flaw in his argument.

(See Meditations I&II but skip the last 4, they're garbage)
dilletante
Feb. 24th, 2006 09:04 pm (UTC)
actually, though my memory is fuzzy, i seem to remember he used a more-or-less literal deus ex machina to get out of it, and i'm not sure i'm willing to buy into that. but alas, that leaves me stuck being certain only that i exist, which is cold comfort.
sweh
Feb. 24th, 2006 08:58 pm (UTC)
This is essentially just a modern reformulation of the questions asked by Descartes. He used the concept of an evil demon. "Descartes supposed there was a powerful evil demon whose vocation it was to deceive us. Such a mighty creature would be well equipped to feed you sensations of all sorts. The world would seem to you as it does now, but there would be nothing correspondent to any of your perceptions."

Today we ask similar questions but using virtual reality instead of a demon.

The end result of this line of thought is merely "I think, therefore I am", and that can easily lead to solipsism. Nothing you believe you sense may have an objective reality. If you can't trust your senses then you have nothing.

What is reality? In my opinion it's nothing more than shared consensus on what we believe we perceive.

See http://www.philosophers.co.uk/cafe/phil_aug2001.htm for more on Descartes.
ktar
Feb. 24th, 2006 10:30 pm (UTC)
In my Intro to Philosophy class we had to write a paper on his argument, but switching "Evil Demon" with a Psychology Undergrad who shot you with a tranquilizer dart and kept your brain hooked up to a myriad of machines underneath one of the Science buildings.

It always made me chuckle because I'm a Psychology Undergrad.
justbeast
Feb. 24th, 2006 09:42 pm (UTC)
That question is touched upon briefly in Iain Banks's latest novel, The Algebraist.

One of the plot points in the novel was that the galactic civilization there claimed to have come up with the first 'scientifically proved' religion -- that in fact our universe /was/ inside a simulation. And the various orders and implications that stemmed from that -- various missionary arms formed to 'wake up' everybody, or at least inform them that they're living in a simulation. It was believed that once every sentient being (or at least a critical mass) was informed of this fact, the game would be up, and the simulation would... end? Change into a new one? Something.
The upshot of it was that the civilization was free to pursue the usual imperialist/expansionistic strategy.

Now, since this is just a scifi book, it was just claimed with the usual hand-waving, and he didn't go into /how/ they proved it.

I would imagine that it's a matter of physics research, rather than philosophy or sophistry. That is, once they get down low enough, to see various suspicious properties of fundamental particles that would imply that something is off, that we're in a simulation.

Of course, if it was an absolutely perfect or seamless simulation, there would be no way to tell. And why would we care, in that case?
That's the thing that the Matrix, and its ilk, do not address. If the simulation is perfect, why privilege the 'real' reality?
You only want to escape if the simulation is substandard, or somehow harmful.
fuzzybutchkins
Feb. 25th, 2006 03:23 am (UTC)
but that's the rub. the matrix *wasn't* perfect. recognising the errors is what led to Neo waking up, and arguably, the imperfections are what made Agent Smith so darned grouchy all the time. (as i remember it, Smith resented his need to exist to herd the "human infection" around. if the matrix were perfect, there'd be no reason for agents to exist.)

it's a very platonic idea that God is perfection, and Creation is inherently flawed due to the limitations of the material used (which, incidentally, is essentially the opposite assumption of the Intelligent Design folks, isn't it?). it follows that things generated by beings who are a part of creation would have compounded imperfection.

my theory is that when beings from Universe 1.0 create SimUniverse Online, the simulation fundamentally cannot be as seamless as the original, and beings from U1 who are inserted into SUO will eventually notice, since they are by nature more perfect. Maybe it'll take advanced particle research, or maybe all of a sudden your cat will bluescreen one day. My gut, for no particular reason, says that the proof will present obviously, spontaneously, and self-evidently... but don't ask me to argue why ;-) probably too much reading of the Christian Revelation mythos.

it's the ultimate cataclysmic event, isn't it? the universe seems seamless and perfect until it isn't. If you say that there's no reason to question a perfect sim, and i say that discovery of the sim is a foregone conclusion... *laughs* i guess we haven't actually gotten anywhere except to spread the message of "Relax!"
serolynne
Feb. 24th, 2006 09:46 pm (UTC)
Simple.

Are hot chicks in red dresses winking at you? If yes, you're in the Matrix.
dariens_haircut
Feb. 24th, 2006 10:54 pm (UTC)
Now, you tell me? You couldn't tell me that before the thing?
kiwitayro
Feb. 24th, 2006 10:47 pm (UTC)
have you been reading my journal again? hehehe.

feorlen
Feb. 24th, 2006 11:36 pm (UTC)
To reply to your post, I went looking for a good translation of "The Republic." I found these...

http://www.geocities.com/larkspur10/neo/matrixplatoscave.html
http://www.geocities.com/freeyourbrain/cave.htm

perky_bear
Feb. 25th, 2006 10:36 am (UTC)
It's been a long time since I read Plato and Descartes, but going back farther than that, I remember a discussion with my father. We were discussing the development of modern astronomy. The sense of the discussion was that determining ultimate reality is very difficult. If we're living in a matrix, it's our reality and we can't determine otherwise because we have no clues, no data.

You can't prove something for which there is no data, no yardstick, and no way of developing either. Descartes' cogito ergo sum may have been brilliant, but Locke's analysis of the basis of knowledge was more useful.
baerana
Feb. 25th, 2006 10:48 am (UTC)
someone inside the system can never prove it one way or another

and I'm ok with that :)

I choose to believe there isn't an elephant in the room when I close my eyes, and I choose to believe we aren't in a matrix or the dream of an evil demon. If it turns out we are, eh, I did the best I could w/ the information at hand. Or, the demon dreams I did, either way :)
physicsduck
Feb. 25th, 2006 02:20 pm (UTC)
Oh deliver me from amateur philosophers....

Ok bitches, here's how it is, let's start simple.

I think the way you answer this begins with what you make of the question itself, and what grounds someone has to pose it and reject the answers you might give. One thing I would say is that the question is a set up from the start; someone is asking you to stipulate a system in which you can’t tell fake from real and then saying that you can’t tell fake from real. Well, duh. But on the other hand, the possibility that we are radically deceived is not completely outlandish. I’ve taught more classes than I can recall where I stood at the front of a room and tried to convince my students that they might be merely dreaming that they are sitting in a classroom, and as my reward for all this, I now have dreams in which I’m teaching philosophy classes. Karma’s sneaky that way. So we have to at least say that radical deception is at least a remote possibility for us. The question is, what should we do about it?

Let’s call those who assert the possibility that we are brains-in-vats, connected to supercomputers that stimulate the various nerve endings leading to various parts of our brains so that we feel like we’re experiencing a normal life, ‘skeptics’. To motivate this, you have to begin by making explicit an assumption that the skeptics trade on: knowledge requires certainty. That is, you might have beliefs of which you are kind of convinced, or even some that you have really good reason to believe, but no belief counts as knowledge unless you have absolute perfect certainty without even the most remote doubt or possibility of being wrong. The brain-in-a-vat case is there to tease out that intuition in you; you normally think of yourself as knowing ordinary things like “there’s a keyboard in front of me right now,” but you have to admit you can’t totally rule out the brain-in-a-vat possibility, so you’re not certain and therefore, you don’t know.

You can think of responses to this falling into two large families. There are some who will agree that knowledge requires certainty, and therefore our job is to find some things that are certain that we can claim to know. Call this position “infallibilism.” On the other hand, there are those who would say that the skeptic is imposing an unreasonably high standard and we can claim to know a great many things; the challenge from here is just to articulate a reasonable standard or ways of fixing a standard that will tell us what counts as knowledge. Call this position “fallibilism.” The fallibilist is not saying that any old belief counts as knowledge, and the standard they suggest may still be quite high, but it will leave room for knowledge in cases in which there is at least some remote possibility that we are mistaken. So, normal cases in which I am not a brain in a vat and all my other beliefs and experiences tell me so are probably cases in which I know most common sense things. There are long, hard fights about whether one should be a fallibilist or an infallibilist. The infallibilist has a certain kind of intuition on their side. As soon as skeptics bring up the possibility of error, many people immediately feel a sense that they don’t know what they thought they did. This is the high standard of certainty coming to mind and playing a role for us, the skeptic will say. The fallibilist has a certain common sense intuition on their side as well, though. You negotiate evidence and possible sources of error all the time, and you intuitively come to a point of satisfaction in just about every case where you’re not talking to a skeptic. This suggests that skepticism itself is introducing something foreign to our thinking about knowledge. Infallibilism is a theory with a long history and a lot of deep problems. Fallibilism is much more popular today, but it has a host of problems, as well. 
I won’t try to tell you which one to buy into here, though I’ll admit that I’m a fallibilist. For now, let’s just talk about the different sub-families of the two.

physicsduck
Feb. 25th, 2006 02:21 pm (UTC)

Infallibilism
The Granddaddy of all infallibilists is definitely Rene Descartes. He was writing on the cusp of a resurgence of skepticism in Europe, driven by a growing dissatisfaction with Catholic doctrinal authority, and he thought the only way to settle the question once and for all was to take on the toughest skeptics and respond with a model for knowledge that would make us absolutely certain about at least some of our beliefs. Being a mathematician by training (as in: Cartesian coordinates), his hero was Euclid, and he wanted the theory to look like Euclidian geometry. So he figured you should start with things that were absolutely certain (like axioms) and then deduce things that followed from those foundations with absolute logical necessity. That way, you start with certainty and every move away from the foundation guarantees that that certainty carries over to other beliefs. You just need something to get started with, and this is where you get Descartes giving modern philosophy’s best known quote, “I think, therefore I am.” Most people who mouth that off take it as a sort of pronouncement about the fact that you think, but it actually plays that foundational, axiom-like role in Descartes’s view. Whatever else might be true or however I might be deceived, I know that I have thoughts and therefore, I know that I exist. I don’t yet know what kind of thing I am, of course. I could be a person with a body as common sense suggests, or I could be a brain in a vat, or I could be some sort of disembodied soul in a void, but whatever *I* am, *I* exist. So there’s something I can be certain of, an infallibilist would say. (There are some fallibilists who think knowledge has this kind of foundation/resting-on-the-foundation structure, but most infallibilists do accept something like it.)

The tricky bit is, the kinds of things I can say with that sort of certainty are pretty limited, so if you want to get back to knowing ordinary everyday things, you’re going to have to build on them. Trouble is, that’s where the skeptic has you pinned pretty well. Even if I am certain that I exist, all the experiences that I’m having or have had are just the sort of things that could happen to a brain in a vat at the hands of a mad scientist, so I can’t trust any of that. The sorts of things I do in logic or mathematics don’t depend on experience in those ways, but they’re things on which I can make mistakes and not realize it. So even with that base of certainty, you can’t go too far with it. I might very well be a brain in a vat, and I just can’t be certain that I’m not.

Some people are content to swirl around in the circle of their own thoughts, and think that there’s enough there to work with. Some philosophers have adopted a position called “phenomenalism,” which states that our common sense beliefs are actually a kind of code or shorthand for much larger sets of statements about the qualitative aspects of our experiences. So statements and beliefs about tables are shorthand for larger sets of statements about table-shaped and table-colored and table-textured and, I suppose, table-flavored sense data. The advantage there is that even if I don’t know that there is a table in front of me, I can say that the current state of my visual field includes some table-shaped regions of such and such color. Even if I’m a brain in a vat, I can be certain that those features of the experiences I have are actually there. So if I can just figure out some way of casting all my beliefs as really, deep down, being about that stuff that I’m so certain of, I’ll have most of my beliefs back. If this sounds weird, that’s because it is; it’s fallen pretty squarely out of favor in the last fifty years or so, thanks in no small part to one of my philosophical heroes, Wilfrid Sellars. Not too many folks like this around, so moving on to fallibilism…
datan0de
Feb. 25th, 2006 06:44 pm (UTC)
Didn't we discuss this in person last weekend, and didn't you mention something about it being mathematically impossible to completely emulate a complex system with 100% accuracy? I could very well be mis-recalling our conversation.

At any rate, since it is impossible to prove a negative, we can't prove that we aren't brains in vats. However, it should in theory be at least possible to prove that we are, if indeed that's the case. If we aren't brains in vats, we can never definitively establish this fact. Deal.

Let's throw philosophy to the side for a minute and approach this as an engineering problem. Here you are inside a complete simulation, attempting to determine its true nature. The only way to do this would be to force some manner of interaction with the real world. I see three ways of doing this:

1) You can sit and wait for something outside the box to make a change to the environment that absolutely goes against the internal rules of the simulation, proving the existence of something external. This strongly resembles doing nothing, and therefore isn't terribly interesting.

2) You can attempt to discover an inherent inconsistency in the simulation- some property of the virtual environment that breaks the continuity of its own rules. This is a little more interesting. If it were as simple a matter as "every time I bang these two rocks together everyone around me sees ominous white glyphs against a blue background" it would be trivial, but that would indicate a very poor virtual environment indeed. Any system engineer worth his salt should be able to create a more robust system than that.

No, the flaws would likely be more subtle. For example, if you found that the laws which govern matter on a quantum level and the laws which govern matter on a macro scale were mutually incompatible, this would lend strong evidence to the unreality of our existence. ;-)

3) You can try to actively force the simulation to be influenced by factors outside of it. This might not be as difficult as it sounds, particularly since the simulator itself is outside of the simulation. A simulator must by necessity be more complex than the simulation. Attempting to simulate the entire known universe (or at least the portions of it that we are perceiving at any given time) must be quite a chore for the simulator hardware. If we can sufficiently tax that hardware even further then we can effectively interact with the limitations of the hardware by producing measurable effects within the simulation.

In the simulations/emulations with which we are already familiar, pushing the hardware limits can produce noticeable effects within the simulation- dropped frames, artifacting, registers being counted faster than they're updated or vice versa, etc. We can try this in our current environment and see if we get any odd effects. For example, if we were in a simulation we might expect to find that when we accelerate an object to near the limits of the simulation the "dropped frames" effect might manifest itself in the form of the simulator being unable to update the object's internal reference at the normal rate, such that time would appear to pass more slowly for the object than for a stationary observer. The simulator might not be able to update its outside visual reference fast enough either, causing the object to appear to contract from the point of view of a stationary observer. Furthermore, in each clock cycle, the registers that record the location of the object's mass might be updated for the new location before they're cleared for the old location. Due to bits of the mass being counted multiple times, the closer the object got to the simulation's hard velocity limit the more mass it would appear to have to an outside observer.

This would probably be a consistent, measurable phenomenon. The equation for this mass multi-counting (let's call it "mass dilation") could be something along the lines of this:

observed_mass = actual_mass / sqrt(1 - (velocity^2 / c^2)), where "c" is whatever the hard velocity limit of your simulation is.

Hypothetically, of course. ;-)
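The hypothetical "mass dilation" formula above (which is, of course, just the relativistic mass formula with c playing the role of the simulation's hard velocity limit) can be sketched in a few lines of Python. The function name and the example speeds are illustrative assumptions, not anything from the thread:

```python
import math

def observed_mass(actual_mass, velocity, c=299_792_458.0):
    """Hypothetical 'mass dilation' of an object in a simulation whose
    hard velocity limit is c (defaulted here to the speed of light in m/s).
    Mathematically this is the relativistic mass formula:
    observed = actual / sqrt(1 - v^2 / c^2)."""
    if not 0 <= velocity < c:
        raise ValueError("velocity must be non-negative and below the hard limit c")
    return actual_mass / math.sqrt(1 - (velocity / c) ** 2)

# At everyday speeds the "multi-counting" is negligible:
print(observed_mass(1.0, 300.0))                  # very nearly 1.0
# Near the hard limit it blows up:
print(observed_mass(1.0, 0.99 * 299_792_458.0))  # ~7.09
```

As velocity approaches c the denominator goes to zero and the observed mass diverges, which matches the commenter's point: the closer the object gets to the simulation's hard velocity limit, the more mass it appears to have.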
wolfger
Feb. 25th, 2006 09:14 pm (UTC)
We would only be able to tell if the simulation was in some way flawed. And even then, we would have to be aware of that flaw. For example, what if the flaw was that ducking into a particular phone booth teleported you elsewhere... You would have to be aware that it was an error in the simulation, and not real. Otherwise, it would seem like you had just stumbled upon some top secret device that is "real".
papertygre
Feb. 26th, 2006 12:42 am (UTC)
Not having read all of the comments above, my thoughts are:

A simulation isn't perfect, by definition. If it were perfect, it would *be* the real thing.

So if you could find and demonstrate some contradictions in the simulation, you could show that *something* weird is up, from within the simulation.

Although on second thought, if you found some apparent contradictions, it might be hard to show that you didn't just make a mistake. And even if people double-checked you until everyone was 100% sure there's a weird anomaly here, then so long as you have no way of getting outside the simulation, it's hard to be sure that the world doesn't just happen to be that way.
zotmeister
Feb. 26th, 2006 02:33 am (UTC)
There's this delightful movie called The Thirteenth Floor that unfortunately not a lot of people have seen because it came out shortly after The Matrix; it demonstrates the if-it's-a-simulation-it-has-a-weakpoint stance. I recommend it to anyone and everyone involved in this topic.

There's also this delightful mathematical principle called Gödel's second incompleteness theorem, which states that no sufficiently powerful consistent formal system can prove its own consistency, so it's hopeless to determine the reality factor of one's existence without external prodding. Which, incidentally, is exactly what happens - unintentionally - in The Thirteenth Floor. It's a clever little flick. - ZM