SIA on other minds

Another interesting implication of the self-indication assumption (SIA), if it is right, is that solipsism is much less likely to be correct than you previously thought, and, relatedly, that the problem of other minds is less problematic.

Solipsists think they are unjustified in believing in a world external to their minds, as one only ever knows one’s own mind and there is no obvious reason the patterns in it should be driven by something else (curiously, holding such a position does not entirely dissuade people from trying to convince others of it). This can then be debated on grounds of whether a single mind imagining the world is more or less complex than a world causing such a mind to imagine a world.

The problem of other minds is that even if you believe in the outside world that you can see, you can’t see other minds. Most of the evidence for them is by analogy to yourself, which is only one ambiguous data point (should I infer that all humans are probably conscious? All things? All girls? All rooms at night time?).

SIA says many minds are more likely than one, given that you exist. Imagine you are wondering whether this is World 1, with a single mind among billions of zombies, or World 2, with billions of conscious minds. If you start off roughly uncertain, updating on your own conscious existence with SIA shifts the probability of World 2 to billions of times the probability of World 1.

Similarly for solipsism. Other minds probably exist. From this you may conclude the world around them does too, or just that your vat isn’t the only one.
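The zombie-worlds update above can be sketched in a few lines. This is an illustrative sketch only: the prior and the observer counts are assumptions chosen for the example, not figures from the post.

```python
# Illustrative SIA update between two hypotheses.
# The observer counts are assumptions for the sketch.
prior = {"World 1": 0.5, "World 2": 0.5}              # roughly uncertain to start
observers = {"World 1": 1, "World 2": 5_000_000_000}  # one mind vs. billions

# SIA: weight each hypothesis by the number of observers it contains,
# then renormalize to get a posterior over worlds.
weighted = {w: prior[w] * observers[w] for w in prior}
total = sum(weighted.values())
posterior = {w: weighted[w] / total for w in prior}

print(posterior["World 2"] / posterior["World 1"])  # ratio is 5 billion to 1
```

Whatever observer counts you plug in, the posterior ratio between the worlds equals the prior ratio multiplied by the ratio of observer counts, which is why World 2 ends up billions of times more probable.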

21 responses to “SIA on other minds”

  1. Does SIA also predict that the universe is infinite and contains infinitely many copies of myself?

    Or if you don’t want to deal with infinities, does SIA make it much more likely that the size of the total universe is much, much bigger than the size of the visible universe?

  2. I don’t think you’ve answered the problem here. Solipsists in my experience don’t say that they are definitely the only self-aware person in the universe, they just say that they can’t *know* that anyone else is self-aware.
    Saying that there’s a higher probability that there are billions of self-aware minds than that there’s just one isn’t the same as proving that there is more than one self-aware mind.

    And of course, as we know nothing about the world generation process I don’t think anything reliable can be said about the probabilities of the different worlds.

  3. To me the Presumptuous Philosopher seems far more devastating than what even that paper allows for.

    The SIA seems to predict the universe should be, with overwhelming probability, maximally saturated with conscious entities – not just that there should be infinitely many minds, but that they should be infinitely dense. It is highly likely, by the SIA, that there is say a mind in every fundamental physical particle, or even at every point in spacetime, or in fact rather infinitely many such minds at every point; and that any a posteriori evidence we might have that suggests the contrary – say, for example, our entire understanding of science – is simply wrong. The world where our large finite number of observations of the actual universe allows us to form reasonable empirical theories can’t compete, in Bayesian terms, against a possible world where there are infinitely many observers everywhere, and the observations are simply all flawed.

    I’d say given the SIA, almost surely (in the precise statistical sense of an outcome with probability one), we exist in a universe with the largest consistent transfinite cardinal number of minds.

    If you’re a pantheist of some sort, maybe that’s a positive outcome. “I think, therefore the universe possesses infinite consciousness to the greatest mathematically possible degree” – certainly it’s a more direct argument than Descartes’ proof of God.

  4. Jesper Östman

    Interesting argument. However, it only helps with the epistemological problem of other minds and not the conceptual problem (http://plato.stanford.edu/entries/other-minds/).

    Tracy: a conception of knowledge that requires strict proof (from indubitable first principles?) leads to skepticism of just about anything.

    • Jesper – that’s been my experience too. And in my understanding that’s where solipsists are coming from – skepticism of just about anything.

  5. mitchell porter

    Wikipedia defines SIA as

    “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.”

    I see no logic to this at all. Why is that any more logical than this?

    “Given the fact that you are Australian, you should (other things equal) favor hypotheses according to which many Australians exist over hypotheses on which few Australians exist.”

    • “Given the fact that you are Australian, you should (other things equal) favor hypotheses according to which many Australians exist over hypotheses on which few Australians exist.”

      That doesn’t seem absurd to me. Select a random human on Earth and wipe their memory except for what their home country is and what other countries exist. If you then have them place a bet on which country is the most populous, their best strategy is to pick their own.

    • mitchell porter

      My current thinking is as follows. Assume for the moment that treating yourself as a typical observer (as in the doomsday argument) is logically valid. Then the problem with the maximal form of SIA (in which you are to think of *all* your properties as being common, not just your property of being an observer) is that the typical member of a set of complex entities does not share *all* its properties with the majority of the others. The typical member will actually be unique in some respects (e.g. by possessing some conjunction of properties) but commonplace in others. So typicality only implies that *something* about you is ordinary, but it doesn’t tell you what.

  6. I believe you have it backwards, because you are assuming that I could have been someone else.

    But by the “logic” of SIA, universes where I could not have been anybody else are more probable. (Because I am me, not someone else. This reasoning is identical to “we are on an inhabited world, not a lifeless one, therefore…”)

    SIA forces me to conclude that the universe most likely consists of quadrillions of exact copies of me, all brains in vats, and nothing else exists at all. (For example, according to SIA, a universe where half those brains are other people is only half as likely as the one I just described.)

    I would call this an extreme variant of solipsism, although I suppose it depends on your definition of “solipsism”.

  7. Jordan: I don’t think that extrapolation is valid. But on the other hand, I don’t think the SIA is a valid assumption, either, as typically stated.

    (If stated as “given that you exist and are self-aware, it’s likely that you didn’t come from nothing or from something completely not-self-aware, so there are almost certainly other self-aware beings”, it would be unremarkable.)

    As stated, it seems to imply “since there’s at least 1 X, there ought to be lots and lots of X”, which doesn’t follow in itself.

    Mitchell’s objection also stands; we can’t, with logical backing, assume that our existence means that existence like ours must be more common than not.

    All we know is that intelligent life exists in at least one place.

    We can’t infer from that whether it’s more likely that it’s so rare that we’re probably the only one in our light cone, or so common that it’s surprising we haven’t already seen any.

  8. I think that the SIA could give quite the opposite result here if we include an understanding of simulations. A computer simulation of one person is much, much cheaper than a whole-world simulation. (The same could be said about Boltzmann brains.) So there are many more solipsistic simulations than whole-world simulations, and it is therefore more likely that I am in a solipsistic simulation (and most observers are in simulations or are Boltzmann brains).

  9. Have you considered the possibility that you might be the zombie? A philosophical zombie may not feel pain, but only reacts as if it does. Now, let’s say you are the zombie. Someone pokes you with a needle, and you react. In order to do so, some internal mechanism must also be triggered to simulate the external appearance of reaction. Since you are the zombie and never knew the qualia of true pain felt by a real person, you interpret that mechanical reaction as genuine pain.

    Would you, as the zombie, never having experienced pain, know that it was not really pain as felt by a human?

    So now, SIA leaves us with the conclusion that we are all zombies.

    • mitchell porter

      Brian, you can’t be a zombie. A zombie is not just free of pain; there’s no-one home at all. No subjective experience in any form. Cogito ergo zombius non sum.

  10. Let us say that 1000 people have read this post and that there are 6.7 billion people currently living in the world.

    Given no other information, the probability is 99.9999851% that you have not read this post.

    For those that have read this post, congratulations on beating the odds!
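    The figure above can be checked directly, using the commenter’s assumed numbers (1,000 readers, 6.7 billion people):

    ```python
    # Checking the quoted figure; both inputs are the commenter's assumptions.
    readers = 1_000
    population = 6_700_000_000

    p_not_read = 1 - readers / population
    print(f"{p_not_read:.7%}")  # 99.9999851%
    ```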

  11. Pingback: To SIA or not to SIA? Link round-up. « Robert Wiblin

  12. Katia, I wrote a post on the Lifeboat blog which may be interesting to you (and quotes you):

    “Natural selection of universes and risks for the parrent civilization”
    http://lifeboat.com/blog/natural-selection-of-universes-and-risks-for-the-parrent-civilization
