SIA and the Two Dimensional Doomsday Argument

This post might be technical. Try reading this if I haven’t explained everything well enough.

When the Self Sampling Assumption (SSA) is applied to the Great Filter, it gives something pretty similar to the Doomsday Argument, which is what SSA gives without any filter. SIA gets around the original Doomsday Argument. So why can't it get around the Doomsday Argument in the Great Filter case?

The Self Sampling Assumption (SSA) says you are more likely to be in possible worlds which contain larger ratios of people you might be vs. people you know you are not*.

If you have a silly hat, SSA says you are more likely to be in World 2, assuming Worlds 1 and 2 are equally likely to exist (i.e. you haven't looked aside at your companions) and your reference class is people.
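
To make the weighting concrete, here is a minimal Python sketch with made-up headcounts (the diagram's actual numbers aren't reproduced here); the SSA weight of a world is taken as the share of its population you might be, which matches the bracketed factors used later in the post.

```python
# Hypothetical headcounts for the two silly-hat worlds (illustrative only;
# the diagram's actual numbers aren't reproduced here).
worlds = {
    "World 1": {"silly_hats": 1, "others": 9},
    "World 2": {"silly_hats": 5, "others": 5},
}

for name, w in worlds.items():
    total = w["silly_hats"] + w["others"]
    ssa_weight = w["silly_hats"] / total  # share of the population you might be
    print(f"{name}: SSA weight {ssa_weight:.2f}")

# With equal priors, SSA favours World 2, where silly-hatted people
# make up a larger share of the population.
```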

The Doomsday Argument uses the Self Sampling Assumption. Briefly, it argues that if there are many generations more humans, the ratio of people who might be you (are born at the same time as you) to people you can’t be (everyone else) will be smaller than it would be if there are few future generations of humans. Thus few generations is more likely than previously estimated.
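
As a toy illustration (all counts hypothetical), here is the same weighting applied to two equally likely worlds that differ only in how many future people they contain:

```python
# Two equally likely worlds; only the number of future people differs.
# Counts are purely illustrative.
worlds = {
    "doom soon": {"might_be_you": 1, "cannot_be_you": 99},
    "doom late": {"might_be_you": 1, "cannot_be_you": 9999},
}

prior = 0.5
weighted = {}
for name, w in worlds.items():
    total = w["might_be_you"] + w["cannot_be_you"]
    weighted[name] = prior * w["might_be_you"] / total  # SSA weighting

norm = sum(weighted.values())
for name, p in weighted.items():
    print(f"{name}: posterior {p / norm:.3f}")

# Nearly all the posterior lands on "doom soon": the Doomsday shift.
```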

An unusually large ratio of people in your situation can be achieved by a possible world having unusually few people unlike you in it or unusually many people like you, or any combination of these.

 

Fewer people who can't be me or more people who may be me make a possible world more likely according to SSA.

For instance, along the horizontal dimension you can compare a set of worlds which all have the same number of people like you and different numbers of people unlike you. The world with the fewest people unlike you gets the largest increase in probability.

 

Doomsday

The top row from the previous diagram. The Doomsday Argument uses possible worlds varying in this dimension only.

The Doomsday Argument is an instance of variation in the horizontal dimension only. In every world there is one person with your birth rank, but the numbers of people with future birth ranks differ.

At the other end of the spectrum you could compare worlds with the same number of future people and varying numbers of current people, as long as you are ignorant of how many current people there are.

The vertical axis. The number of people in your situation changes, while the number of others stays the same. The world with a lot of people like you gets the largest increase in probability.

This gives a sort of Doomsday Argument: the population will probably fall; most groups won't survive.

The Self Indication Assumption (SIA) is equivalent to using SSA and then multiplying the results by the total population of people both like you and not.

In the horizontal dimension, SIA undoes the Doomsday Argument. SSA favours smaller total populations in this dimension, which are disfavoured to the same extent by SIA, perfectly cancelling.

[1/total] * total = 1
(the factor in square brackets is the SSA shift alone)
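
A quick numeric check of that cancellation, sketched with arbitrary world sizes and one person you might be in each world:

```python
# Horizontal dimension: one person you might be per world,
# different numbers of people you cannot be. Totals are arbitrary.
for total in (10, 100, 1000):
    ssa = 1 / total         # SSA shift alone
    combined = ssa * total  # SIA multiplies by the total population
    print(f"total={total:5d}  SSA={ssa:.4f}  SSA*SIA={combined:.1f}")

# SSA alone favours the smallest world (the Doomsday shift),
# but after multiplying by the total every world scores 1: no shift is left.
```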

In vertical cases, however, SIA actually makes the Doomsday Argument analogue stronger. The worlds favoured by SSA in this case are the larger ones, because they have more current people. These larger worlds are further favoured by SIA.

[(total – 1)/total]*total = total – 1
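
And the same sketch with the vertical-case weight swapped in (arbitrary totals again, with one person you cannot be in each world):

```python
# Vertical dimension: one person you cannot be per world,
# different numbers of people you might be. Totals are arbitrary.
for total in (10, 100, 1000):
    ssa = (total - 1) / total  # SSA shift alone
    combined = ssa * total     # SIA multiplies by the total population
    print(f"total={total:5d}  SSA={ssa:.3f}  SSA*SIA={combined:.0f}")

# SSA already favours the larger worlds (more people in your situation),
# and SIA multiplies that advantage by the population, strengthening the shift.
```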

The second type of situation is relatively uncommon, because you will tend to know more about the current population than the future population. However, cases in between the two extremes are not so rare. For instance, we are uncertain about creatures at about our level of technology on other planets, and also about creatures at some future levels.

This means the Great Filter scenario I have written about is an in-between scenario, which is why the SIA shift doesn't cancel the SSA Doomsday Argument there, but rather makes it stronger.

Expanded from p32 of my thesis.

——————————————-
*or observers you might be vs. those you are not, for instance – the reference class may be anything, but that is unnecessarily complicated for the point here.

5 responses to “SIA and the Two Dimensional Doomsday Argument”

  1. Very interesting info, I am waiting for more. Keep updating your site and you will have a lot of readers!

  2. Interesting. Since reading Baxter’s Manifold Time I am obsessed by the DA, aka the Carter Catastrophe. Most say it’s bollocks because its basic assumptions are wrong, i.e. that we are a random sample of humans etc. But they fail to see that there are only three possibilities for humanity:
    * a) Expansion: the continued increase of human numbers, presumably requiring an eventual move off-planet. Perhaps even the colonization of other star systems, and far in the future, of the entire galaxy itself.
    * b) Stabilization: we settle for resources on the Earth, and find a way to manage our numbers and our planet indefinitely.
    * c) Extinction: for whatever reason — asteroid impact, global war, biotech disaster, runaway nanotechnology — the human race dies out.
    Given the facts of technology, vastly depleting resources and space on this planet, a) remains totally in the realm of sci-fi, and even for b) you would have to be more optimistic than reason could justify, so c) is the only probable outcome! Only the timeframe is uncertain; I’d say human extinction within the next 5000-10000 years is very likely.

  3. Your nice diagram makes it clear that using simple counting, we can expect the probability of being in a world (given that we are one of the people wearing a silly hat) to simply be proportional to the number of people in silly hats in that world. Which, fortunately, is what I think you get – the horizontal cancels and the vertical matters linearly (i.e. 3,1 is 3 times more likely than 1,1).

    But I don’t think that contributes to a doomsday argument – once we know we live now, we expect a decrease in population on average (lacking any other information), since we guess that this box has an above average number of “now” people. But for any group, the decrease is only to the average, not to actual “doomsday.”

  4. Great post.

    What do you think about DA in the form of Gott’s formula, which was discussed in detail in the book “Apocalypses: when?” by Wells?

  5. Alexei Andreev

    I really liked this post, thank you! I’ve been thinking: can the Doomsday Argument be taken to mean that there are simply no humans in the far future? It doesn’t necessarily mean extinction of human values.
    For example, given that I am a human here and now, it’s likely that the human population will decline after this point. However, it doesn’t mean that something else can’t replace humans. If we create FAI, the future will be weird and most likely we won’t have humans in the sense we have now (or we’ll have fewer and fewer humans over time). It might be possible that the bottom left box is for all current humans and one massive brain. We are more likely to be a human than to be the brain.
    Does this make sense?
