Tag Archives: Anthropics

Suspicious arguments regarding cow counting

People sometimes think that the doomsday argument is implausible because it always says we are more likely to die out sooner than our other reasoning suggests, regardless of the situation. There’s something dubious about an argument that has the same conclusion about the world regardless of any evidence about it. Nick Bostrom paraphrases, “But isn’t the probability that I will have any given rank always lower the more persons there will have been? I must be unusual in some respects, and any particular rank number would be highly improbable; but surely that cannot be used as an argument to show that there are probably only a few persons?” (he does not agree with this view).

That this reasoning is wrong is no new insight. Nick explains, for instance, that in any given comparison of futures of different lengths, doomsday reasoning doesn’t always give the same answer: you might have learned that your birth rank ruled out the shorter future. It remains the case, though, that the shift from whatever you currently believe to what the doomsday argument tells you to believe is always toward shorter futures. I think it is this that seems fishy to people.

I maintain that the argument’s predictable conclusion is not a problem at all, and I would like to make this vivid.

Once a farmer owned a group of cows. He would diligently count them, to ensure none had escaped, and discover if there were any new calves. He would count them by lining them up and running his tape measure along the edge of the line.

“One thousand cows” he exclaimed one day. “Fifty new calves!”

His neighbour heard him from a nearby field, and asked what he was talking about. The farmer held out his tape measure. The incredulous neighbour explained that since cows are more than an inch long, his figures would need some recalculation. Since his cows were about five foot long on average, the neighbour guessed he would need to divide his number by 60. But the farmer quickly saw that this argument must be bogus. If his neighbour was right, whatever number of cows he had the argument would say he had fewer. What kind of argument would that be?

A similar one to the Doomsday Argument’s claim that the future should always be shorter than we otherwise think. In such cases the claim is that your usual method of dealing with evidence is biased, not that there is some particular uncommon evidence that you didn’t know about.

Similarly, the Self Indication Assumption's ‘bias’ toward larger worlds is taken as a reason against it. Yet it is just a claim that our usual method is biased toward small worlds.

On the Anthropic Trilemma

Eliezer’s Anthropic Trilemma:

So here’s a simple algorithm for winning the lottery: Buy a ticket.  Suspend your computer program just before the lottery drawing – which should of course be a quantum lottery, so that every ticket wins somewhere.  Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery.  Then suspend the programs, merge them again, and start the result.  If you don’t win the lottery, then just wake up automatically. The odds of winning the lottery are ordinarily a billion to one.  But now the branch in which you win has your “measure”, your “amount of experience”, temporarily multiplied by a trillion.  So with the brief expenditure of a little extra computing power, you can subjectively win the lottery – be reasonably sure that when next you open your eyes, you will see a computer screen flashing “You won!”  As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn’t feel a thing.

See the original post for assumptions, what merging minds entails etc. He proposes three alternative bullets to bite: accepting that this would work, denying that there is “any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears” so undermining any question about what you should anticipate, and Nick Bostrom’s response, paraphrased by Eliezer:

…you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds. To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities.  If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.  And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1.  But we don’t have p(“experience win after 15s”) = p(“experience win after 15s”|”experience win after 5s”)*p(“experience win after 5s”) + p(“experience win after 15s”|”experience not-win after 5s”)*p(“experience not-win after 5s”).

I think I already bit the bullet about there not being a meaningful sense in which I won’t wake up as Britney Spears. However I would like to offer a better solution, one relatively free of bullet biting.

First notice that you will have to bite Bostrom’s bullet if you even accept Eliezer’s premise that arranging to multiply your ‘amount of experience’ in one branch in the future makes you more likely to experience that branch. Call this principle ‘follow-the-crowd’ (FTC). And let’s give the name ‘blatantly obvious principle’ (BOP) to the notion that P(I win at time 2) is equal to P(I win at time 2|I win at time 1)P(I win at time 1)+P(I win at time 2|I lose at time 1)P(I lose at time 1). Bostrom’s bullet is to deny BOP.

We can set aside the bit about merging brains together for now; that isn’t causing our problem. Consider a simpler and smaller (for the sake of easy diagramming) lottery setup where after you win or lose you are woken for ten seconds as a single person, then put back to sleep and woken as four copies in the winning branch or one in the losing branch. See the diagram below. You are at Time 0 (T0). Before Time 1 (T1) the lottery is run, so at T1 the winner is W1 and the loser is L1. W1 is then copied to give the multitude of winning experiences at T2, while L2 remains single.

Now using the same reasoning as you would to win the lottery before, FTC, you should anticipate an 80% chance of winning the lottery at T2. There is four times as much of your experience winning the lottery as not then. But BOP says you still only have a fifty percent chance of being a lottery winner at T2:

P(win at T2) = P(win at T2|win at T1)×P(win at T1) + P(win at T2|lose at T1)×P(lose at T1) = 1×1/2 + 0×1/2 = 1/2

FTC and BOP conflict. If you accept that you should generally anticipate futures where there are more of you more strongly, it looks like you accept that P(a) does not always equal P(a|b)P(b)+P(a|-b)P(-b). How sad.
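To make the clash concrete, here is the arithmetic of the two principles for the small setup above (one losing continuation, four winning copies at T2), as a rough sketch in Python; the function names are mine, not anything standard.

```python
# Simplified copy-lottery from the diagram: a fair draw at T1, then the
# winner is copied into 4 people at T2 while the loser stays single.
P_WIN_T1 = 0.5
WINNER_COPIES_T2 = 4
LOSER_COPIES_T2 = 1

def ftc_anticipation():
    """Follow-the-crowd: weight each branch by the amount of
    'you-experience' it contains at T2."""
    win_weight = P_WIN_T1 * WINNER_COPIES_T2
    lose_weight = (1 - P_WIN_T1) * LOSER_COPIES_T2
    return win_weight / (win_weight + lose_weight)

def bop_anticipation():
    """Blatantly obvious principle: law of total probability over T1.
    If you won at T1 you are certainly a winner at T2, and vice versa."""
    p_win_t2_given_win_t1 = 1.0
    p_win_t2_given_lose_t1 = 0.0
    return (p_win_t2_given_win_t1 * P_WIN_T1
            + p_win_t2_given_lose_t1 * (1 - P_WIN_T1))

print(ftc_anticipation())  # 0.8 -- anticipate winning 80% of the time
print(bop_anticipation())  # 0.5 -- anticipate winning 50% of the time
```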

Looking at the diagram above, it is easy to see why these two methods of calculating anticipations disagree.  There are two times in the diagram that your future branches, once in a probabilistic event and once in being copied. FTC and BOP both treat the probabilistic event the same: they divide your expectations between the outcomes according to their objective probability. At the other branching the two principles do different things. BOP treats it the same as a probabilistic event, dividing your expectation of reaching that point between the many branches you could continue on. FTC treats it as a multiplication of your experience, giving each new branch the full measure of the incoming branch. Which method is correct?

Neither. FTC and BOP are both approximations of better principles. Both of the better principles are probably true, and they do not conflict.

To see this, first we should be precise about what we mean by ‘anticipate’. There is more than one resolution to the conflict, depending on your theory of what to anticipate: where the purported thread of personal experience goes, if anywhere. (Nope, resolving the trilemma does not seem to answer this question).

Resolution 1: the single thread

The most natural assumption seems to be that your future takes one branch at every intersection. It does this based on objective probability at probabilistic events, or equiprobably at copying events. It follows BOP. This means we can keep the present version of BOP, so I shall explain how we can do without FTC.

Consider diagram 2. If your future takes one branch at every intersection, and you happen to win the lottery, there are still many T2 lottery winners who will not be your future. They are your copies, but they are not where your thread of experience goes. They and your real future self can’t distinguish who is actually in your future, but there is some truth of the matter. It is shown in green.

Diagram 2

Now while there are only two objective possible worlds, when we consider possible paths for the green thread there are five possible worlds (one shown in diagram 2). In each one your experience follows a different path up the tree. Since your future is now distinguished from other similar experiences, we can see that the weight of your experience at T2 in a world where you win is no greater than the weight in a world where you lose, though there are always more copies who are not you in the world where you win.

The four worlds where your future is in a winning branch are each only a quarter as likely as one where you lose, because there is a fifty percent chance of you reaching W1, and after that a twenty five percent chance of reaching a given W2. By the original FTC reasoning then, you are equally likely to win or lose. More copies just makes you less certain exactly where it will be.
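A sketch of this bookkeeping, with the thread assumed (as above) to pass through the winning copies equiprobably:

```python
# Enumerate the possible routes of the 'green thread' in the simplified
# lottery: one losing world, and four winning worlds distinguished by
# which T2 copy the thread passes through.
P_WIN = 0.5
N_WIN_COPIES = 4

# (description, probability that this is where *your* thread goes)
thread_worlds = [("lose, thread at L2", 1 - P_WIN)]
thread_worlds += [(f"win, thread at W2 copy {i}", P_WIN / N_WIN_COPIES)
                  for i in range(N_WIN_COPIES)]

p_my_thread_wins = sum(p for desc, p in thread_worlds if desc.startswith("win"))
print(p_my_thread_wins)  # 0.5 -- what you should anticipate at T0

# A T2 person who only knows they are somewhere at T2 (not whether they
# are on your thread) counts heads instead:
p_random_t2_person_won = N_WIN_COPIES / (N_WIN_COPIES + 1)
print(p_random_t2_person_won)  # 0.8 -- what a T2 copy should believe
```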

I am treating the invisible green thread like any other hidden characteristic. Suppose you know that you are and will continue to be the person with the red underpants, though many copies will be made of you with green underpants. However many extra copies are made, a world with more of them in future should not get more of your credence, even if you don’t know which future person actually has the red pants. If you think of yourself as having only one future, then you can’t also consider there to be a greater amount of your experience when there are a lot of copies. If you did anticipate experiences based on the probability that many people other than you were scheduled for that experience, you would greatly increase the minuscule credence you have in experiencing being Britney Spears when you wake up tomorrow.

Doesn’t this conflict with the use of FTC to avoid the Boltzmann brain problem, Eliezer’s original motivation for accepting it? No. The above reasoning means there is a difference between where you should anticipate going when you are at T0, and where you should think you are if you are at T2.

If you are at T0 you should anticipate a 50% chance of winning, but if you are at T2 you have an 80% chance of being a winner. Sound silly? That’s because you’ve forgotten that you are potentially talking about different people. If you are at T2, you are probably not the future of the person who was at T0, and you have no way to tell. You are a copy of them, but their future thread is unlikely to wend through you. If you knew that you were their future, then you would agree with their calculations.

That is, anyone who only knows they are at T2 should consider themselves likely to have won, because there are many more winners than losers. Anyone who knows they are at T2 and are your future, should give even odds to winning. At T0, you know that the future person whose measure you are interested in is at T2 and is your future, so you also give even odds to winning.

Avoiding the Boltzmann brain problem requires a principle similar to FTC which says you are presently more likely to be in a world where there are more people like you. SIA says just that for instance, and there are other anthropic principles that imply similar things. Avoiding the Boltzmann brain problem does not require inferring from this that your future lies in worlds where there are more such people. And such an inference is invalid.

This is exactly the same as how it is invalid to infer that you will have many children from the fact that you are more likely to be from a family with many children. Probability theory doesn’t distinguish between the relationship between you and your children and the relationship between you and your future selves.

Resolution 2

You could instead consider all copies to be your futures. Your thread is duplicated when you are. In that case you should treat the two kinds of branching differently, unlike BOP, but still not in the way FTC does. It appears you should anticipate a 50% chance of becoming four people, rather than an 80% chance of becoming one of those people. There is no sense in which you will become one of the winners rather than another. Like in the last case, it is true that if you are presently one of the copies in the future, you should think yourself 80% likely to be a winner. But again ‘you’ refers to a different entity in this case to the one it referred to before the lottery. It refers to a single future copy. It can’t usefully refer to a whole set of winners, because the one considering it does not know if they are part of that set or if they are a loser. As in the last case, your anticipations at T0 should be different from your expectations for yourself if you know only that you are in the future already.

In this case BOP gives us the right answer for the anticipated chances of winning at T0. However it says you have a 25% chance of becoming each winner at T2 given you win at T1, instead of 100% chance of becoming all of them.

Resolution 3:

Suppose that you want to equate becoming four people in one branch with being more likely to be there. More of your future weight is there, so for some notions of expectation perhaps you expect to be there. You take ‘what is the probability that I win the lottery at T1?’ to mean something like ‘what proportion of my future selves are winning at T1?’. FTC gives the correct answer to this question – you aren’t especially likely to win at T1, but you probably will at T2. Or in the original problem, you should expect to win after 5 seconds and lose after 15 seconds, as Nick Bostrom suggested. If FTC is true, then we must scrap BOP. This is easier than it looks because BOP is not what it seems.

Here is BOP again:

P(I win at T2) is equal to P(I win at T2|I win at T1)P(I win at T1)+P(I win at T2|I lose at T1)P(I lose at T1)

It looks like a simple application of

P(a) = P(a|b)P(b)+P(a|-b)P(-b)

But here is a more extended version:

P(win at 15|at 15) = P(win at 15|at 15 and came from win at 5)P(came from win at 5|at 15) + P(win at 15|at 15 and came from loss at 5)P(came from loss at 5|at 15)

This is only equal to BOP if the probability of having a win at 5 in your past when you are at 15 is equal to the probability of winning at 5 when you are at 5. To accept FTC is to deny that. FTC says you are more likely to find the win in your past than to experience it because many copies are descended from the same past. So accepting FTC doesn’t conflict with P(a) being equal to P(a|b)P(b)+P(a|-b)P(-b), it just makes BOP an inaccurate application of this true principle.
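Put in terms of the simplified four-copy lottery from earlier (a sketch; the trillion-copy version is the same with bigger numbers):

```python
# Resolution 3, with the simplified numbers (4 winning copies, 1 loser).
# Under FTC a T2 observer-moment is more likely to have a win in its past
# than a T1 observer-moment was to experience one.
P_WIN_T1 = 0.5
WIN_COPIES, LOSE_COPIES = 4, 1

# FTC weighting of T2 observer-moments by how much experience is there:
p_past_win_given_at_t2 = (P_WIN_T1 * WIN_COPIES) / (
    P_WIN_T1 * WIN_COPIES + (1 - P_WIN_T1) * LOSE_COPIES)   # 0.8

# Correct expansion: condition on where you came from.
p_win_t2 = 1.0 * p_past_win_given_at_t2 + 0.0 * (1 - p_past_win_given_at_t2)
print(p_win_t2)   # 0.8 -- matches FTC

# BOP substitutes P(win at T1|at T1) = 0.5 for the 0.8 above, which is why
# it disagrees: the law of total probability is fine, the substituted
# weight is not.
p_bop = 1.0 * P_WIN_T1 + 0.0 * (1 - P_WIN_T1)
print(p_bop)      # 0.5
```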

In summary:

1. If your future is (by definitional choice or underlying reality) a continuous non-splitting thread, then something like SIA should be used instead of FTC, and BOP holds. Who you anticipate being differs from who you should think you are when you get there. Who you should think you are when you get there is still governed by something like SIA, which avoids the Boltzmann brain problem.

2. If all your future copies are equally your future, you should anticipate becoming a large number of people with the same probability that you would have given to becoming one person if there were no extra copies. In which case FTC does not hold, because you expect to become many people with a small probability instead of one of those many people with a large probability. BOP holds in a modified form where it doesn’t treat being copied as being sent down a random path. But if you want to know what a random moment from your future will hold, a random moment from T1 is more likely to include losing than a random moment from T2. For working out what a random T2 moment will hold, BOP is a false application of a correct principle.

3. If for whatever reason you conceptualise yourself as being more likely to go into future worlds based on the number of copies of you there are in those worlds, then FTC does hold, but BOP becomes false.

I think the most important point is that the question of where you should anticipate going need not have the same answer as where a future copy of you should expect to be (if they don’t know for some reason). A future copy who doesn’t know where they are should think they are more likely to be in world where there are many people like themselves, but you should not necessarily think you are likely to go into such a world. If you don’t think you are as likely to go into such a world, then FTC doesn’t hold. If you do, then BOP doesn’t hold.

It seems to me the original problem uses FTC while assuming there will be a single thread, thereby making BOP look inevitable. If the thread is kept, FTC should not be, which can be conceptualised as in either of resolutions 1 or 2. If FTC is kept, BOP need not be, as in resolution 3. Whether you keep FTC or BOP will give you different expectations about the future, but which expectations are warranted is a question for another time.

Person moments make sense of anthropics

Often people think that various forms of anthropic reasoning require you to change your beliefs in ways other than conditionalizing on evidence. This is false, at least in the cases I know of. I shall talk about Frank Arntzenius’ paper Some Problems for Conditionalization and Reflection [gated] because it explains the issue well, though I believe his current views agree with mine.

He presents five thought experiments: Two Roads to Shangri La, The Prisoner, John Collins’s Prisoner, Sleeping Beauty and Duplication. In each of them, it seems the (arguably) correct answer violates van Fraassen’s reflection principle, which basically says that if you expect to believe something in the future without having been e.g. hit over the head between now and then, you should believe it now. For instance the thirder position in Sleeping Beauty seems to violate this principle because before the experiment Beauty believes there is a fifty percent chance of heads, and that when she wakes up she will think there is a thirty three percent chance. Arntzenius argued that these seemingly correct answers really are the correct ones, and claimed that they violate the reflection principle because credences can evolve in two ways other than by conditionalization.

First he said credences can shift, for instance through time. I know that tomorrow I will have a higher credence in it being Monday than I do today, and yet it would not be rational for me to increase my credence in it being Monday now on this basis. They can also ‘spread out’. For instance if you know you are in Fairfax today, and that tomorrow a perfect replica of your brain experiencing Fairfax will be made and placed in a vat in Canberra, tomorrow your credence will go from being concentrated in Fairfax to being spread between there and Canberra. This is despite no damage having been done to your own brain. As Arntzenius pointed out, such an evolution of credence looks like quite the opposite of conditionalization, since conditionalization consists of striking out possibilities that your information excludes – it never opens up new possibilities.

I agree that beliefs should evolve in these two ways. However they are both really conditionalization, just obscured. They make sense as conditionalization when you think of them as carried out by different momentary agents, based on the information they infer from their connections to other momentary agents with certain beliefs (e.g. an immediately past self).

Normal cases can be considered this way quite easily. Knowing that you are the momentary agent that followed a few seconds after an agent who knew a certain set of facts about the objective world, and who is (you assume) completely trustworthy, means you can simply update the same prior with those same facts and come to the same conclusion. That is, you don’t really have to do anything. You can treat a stream of moments as a single agent. This is what we usually do.

However sometimes being connected in a certain way to another agent does not make everything that is true for them true for you. Most obviously, if they are a past self and know it is 12 o’clock, your connection via being their one-second-later self means you should exclude worlds where you are not at time 12:00:01. You have still learned from your known relationship to that agent and conditionalized, but you have not learned that what is true of them is true of you, because it isn’t. This is the first way Arntzenius mentioned that credences seem to evolve through time not by conditionalization.

The second way occurs when one person-moment is at location X, and another person moment has a certain connection to the person at X, but there is more than one possible connection of that sort. For instance when two later people both remember being an earlier person because the earlier person was replicated in some futuristic fashion. Then while the earlier person moment could condition on their exact location, the later one must condition on being in one of several locations connected that way to the earlier person’s location, so their credence spreads over more possibilities than that of the earlier self. If you call one of these later momentary agents the same person as the earlier one, and say they are conditionalizing, it seems they are doing it wrong. But considered as three different momentary people learning from their connections they are just conditionalizing as usual.
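Here is the replication example from earlier put in these terms, as a rough sketch. The uniform prior over the two later moments is an assumption for illustration, not an argument for any particular anthropic principle.

```python
# The Fairfax/Canberra replication case, treated as conditionalization by
# separate person-moments rather than one persisting agent.

def conditionalize(prior, consistent_with_evidence):
    """Ordinary conditionalization: strike out excluded possibilities and
    renormalize what is left."""
    kept = {w: p for w, p in prior.items() if consistent_with_evidence(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# The earlier moment looks around and sees Fairfax: a normal update that
# excludes a possibility.
earlier_prior = {"in Fairfax": 0.5, "in Canberra": 0.5}
print(conditionalize(earlier_prior, lambda w: w == "in Fairfax"))
# {'in Fairfax': 1.0}

# A later moment knows only that it is connected (by apparent memory of a
# Fairfax view) to that earlier moment. Both candidate locations fit that
# evidence, so nothing is struck out and its credence sits across both.
later_prior = {"original in Fairfax": 0.5, "replica in Canberra": 0.5}
print(conditionalize(later_prior, lambda w: True))
# {'original in Fairfax': 0.5, 'replica in Canberra': 0.5}
```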

What exactly the later momentary people should believe is a matter of debate, but I think that can be framed entirely as a question of what their state spaces and priors look like.

Momentary humans almost always pass lots of information from one to the next, chronologically along chains of memory through non-duplicated people, knowing their approximate distance from one another. So most of the time they can treat themselves as single units who just have to update on any information coming from outside, as I explained. But conditionalization is not specific to these particular biological constructions; and when it is applied to information gained through other connections between agents, the resulting time series of beliefs within one human will end up looking different to that in a chain with no unusual extra connections.

This view also suggests that having cognitive defects, such as memory loss, should not excuse anyone from having credences, as for instance Arntzenius argued it should in his paper Reflections on Sleeping Beauty: “in the face of forced irrational changes in one’s degrees of belief one might do best simply to jettison them altogether”. There is nothing special about credences derived from beliefs of a past agent you identify with. They are just another source of information. If the connection to other momentary agents is different to usual, for instance through forced memory loss, update on it as usual.

Agreement on anthropics

Aumann’s agreement theorem says that Bayesians with common priors who know one another’s posteriors must agree. There’s no apparent reason this shouldn’t apply to posteriors arrived at using indexical information. This does not mean that you and I should both believe we are as likely to be the author of this blog, but that we should agree on the chances that I am.

The Self-Sampling Assumption (SSA) does not allow for this agreement between people with different reference classes, as I shall demonstrate. Consider the figure below. Suppose A people and B people both begin with an equal prior over the two worlds. Everyone knows their type (A or B), but other than that they do not know their location. For instance an A person may be in any of eight places, as far as they know. A people consider their reference class to be A people only. B people consider their reference class to be B people only. The people who are standing next to each other in the diagram meet and exchange their knowledge. For instance an A person meeting a B person will learn that the B person is a B person, and that they don’t know anything much else.

When A people meet B people, they both come to know what the other person’s posterior is. For instance an A person who meets a B person knows that the B person doesn’t know anything except that they are a B person who met an A person. From this the A person can work out the B person’s posterior over which world they are in.

Suppose everyone uses SSA. When an A person and a B person meet, the A people come to think they are four times as likely to be in World 1. This is because in World 2, only a quarter of A people meet a B person, whereas in World 1 they all do. The B people they meet cannot agree – in either world they expected to talk with an A person, and for that A person to be pretty sure they are in World 1. So despite knowing one another’s posteriors and having common priors over which world exists, the A and B people who meet must disagree. Not only on one another’s locations within the world, but over which world they are in*.

An example of this would be a husband and wife celebrating their wedding in a Chinese town with poor census data and an ongoing gender gap. The husband exclaims ‘wow, I am a husband! The disparity between gender populations in this town is probably smaller than I thought’. His wife expected in any case that she would end up with a husband who would make this inference from their marriage, and so cannot update and agree with him. Notice that neither partner need think the other has chosen the ‘wrong’ reference class in any way; it might be the reference class they would have chosen were they in that alternative indexical position.

In both of these cases the Self-Indication Assumption (SIA) allows for perfect agreement. Recall SIA weights the probability of worlds by the number of people in them in your situation. When A and B knowingly communicate, they are in symmetrical positions – either side of a communicating A and B pair. Both parties weight their hypotheses by the number of such pairs, and so they agree. Incidentally, when they first found out that they existed, and later when they learned their type, they did disagree. Communicating resolves this, instead of creating a disagreement as with SSA.
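Since the figure isn’t reproduced here, the sketch below uses populations I have assumed so as to match the ratios described above: in World 1 every A meets a B, in World 2 only a quarter do, and in both worlds every B meets an A.

```python
# A toy reconstruction of the A/B example (the populations are assumptions
# chosen to match the stated ratios, not the original diagram).
worlds = {
    "World 1": {"A": 4, "B": 4, "AB_pairs": 4},
    "World 2": {"A": 4, "B": 1, "AB_pairs": 1},
}
prior = {w: 0.5 for w in worlds}

def normalize(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# SSA for an A who meets a B: likelihood = fraction of the reference class
# (A people) who are in that situation.
ssa_A = normalize({w: prior[w] * d["AB_pairs"] / d["A"] for w, d in worlds.items()})

# SSA for a B who meets an A: fraction of B people in that situation.
ssa_B = normalize({w: prior[w] * d["AB_pairs"] / d["B"] for w, d in worlds.items()})

print(ssa_A)  # {'World 1': 0.8, 'World 2': 0.2} -- A favours World 1 four to one
print(ssa_B)  # {'World 1': 0.5, 'World 2': 0.5} -- B cannot agree

# SIA for either party: weight worlds by the number of people in their
# exact situation (one side of a communicating A-B pair).
sia = normalize({w: prior[w] * d["AB_pairs"] for w, d in worlds.items()})
print(sia)    # {'World 1': 0.8, 'World 2': 0.2} -- both parties agree
```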

*If this does not seem bad enough, they each agree that the other person reasoned as well as they did.

Another implausible implication of this application of SSA is that you will come to agree with creatures that are more similar to you, even if you are certain that a given creature inside your reference class is identical to one outside your reference class in every aspect of its data collection and inference abilities.

The Unpresumptuous Philosopher

Nick Bostrom showed that either position in Extreme Sleeping Beauty seems absurd, then gave a third option. I argued that his third option seems worse than either of the original pair. If I am right there that the case for Bayesian conditioning without updating on evidence fails, we have a choice of disregarding Bayesian conditioning in at least some situations, or distrusting the aversion to extreme updates as in Extreme Sleeping Beauty. The latter seems the necessary choice, given the huge disparity in evidence supporting Bayesian conditioning and that supporting these particular intuitions about large updates and strong beliefs.

Notice that both the Halfer and Thirder positions on Extreme Sleeping Beauty have very similar problems. They are seemingly opposed by the same intuitions against extreme certainty in situations where we don’t feel certain, and extreme updates in situations where we hardly feel we have any evidence. Either before or after discovering you are in the first waking, you must be very sure of how the coin came up. And between ignorance of the day and knowledge, you must change your mind drastically. If we must choose one of these positions then, it is not clear which is preferable on these grounds alone.

Now notice that the Thirder position in Extreme Sleeping Beauty is virtually identical to SIA and consequently the Presumptuous Philosopher’s position (as Nick explains, p64). From Anthropic Bias:

 

The Presumptuous Philosopher

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion, trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion, trillion, trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher [...] appeals to SIA)!”

The Presumptuous Philosopher is like the Extreme Sleeping Beauty Thirder because they are both in one of two possible worlds with a known probability of existing, one of which has a much larger population than the other. They are both wondering which of these worlds they are in.

Is the Presumptuous Philosopher really so presumptuous? Analogous to the Extreme Sleeping Beauty Halfer then shall be the Unpresumptuous Philosopher. When the Unpresumptuous Philosopher learns there are a trillion times as many observers in T2 she remains cautiously unmoved. However, when the physicists later discover where in the cosmos our planet is under both theories, the Unpresumptuous Philosopher becomes virtually certain that the sparsely populated T1 is correct while the Presumptuous Philosopher hops back on the fence.

The Presumptuous Philosopher is often chided for being sure the universe is infinite, given there is some chance of an infinite universe existing. It should be noted that this is only as long as he cannot restrict his possible locations in it to any finite region. The Unpresumptuous Philosopher is uncertain under such circumstances. However she believes with probability one that we are in a finite world if she knows her location is within any finite region. For instance if she knows the age of her spatially finite universe she is certain that it will not continue for infinitely long. Here her presumptuous friend is quite unsure.

This philosopher has a nice perch now, but where will he go if evidence moves him? (Photo of a statue of an unknown Cynic philosopher: Yair Haklai)

It seems to me that since the two positions on Extreme Sleeping Beauty are as unintuitive as each other, the two philosophers are as presumptuous as each other. The accusation of inducing a large probability shift and encouraging ridiculous certainty is hardly an argument that can be used against the SIA-Thirder-Presumptuous Philosopher position in favor of the SSA-Halfer-Unpresumptuous Philosopher side. Since the Presumptuous Philosopher is usually considered the big argument against SIA, and not considered an argument against SSA at all, an update in favor of SIA is in order.

Sleeping Beauty should remain pure


Consider the Sleeping Beauty Problem. Sleeping Beauty is put to sleep on Sunday night. A coin is tossed. If it lands heads, she is awoken once on Monday, then sleeps until the end of the experiment. If it lands tails, she is woken once on Monday, drugged to remove her memory of this event, then awoken once on Tuesday, before sleeping till the end of the experiment. The awakenings during the experiment are indistinguishable to Beauty, so when she awakens she doesn’t know what day it is or how the coin fell. The question is this: when Beauty wakes up on one of these occasions, how confident should she be that heads came up?

There are two popular answers, 1/2 and 1/3. However virtually everyone agrees that if Sleeping Beauty should learn that it is Monday, her odds on Tails should be halved, from whatever they were initially. So ‘Halfers’ come to think heads has a 2/3 chance, and ‘Thirders’ come to think heads is as likely as tails. This is the standard Bayesian way to update, and is pretty uncontroversial.

Now consider a variation on the Sleeping Beauty Problem where Sleeping Beauty will be woken up one million times on tails and only once on heads. Again, the probability you initially put on heads is determined by the reasoning principle you use, but the probability shift if you are to learn that you are in the first awakening will be the same either way. You will have to shift your odds by a million to one toward heads. Nick Bostrom points out that in this scenario, either before or after this shift you will have to be extremely certain either of heads or of tails, and that such extreme certainty seems intuitively unjustified, either before or after knowing you are experiencing the first wakening.

Extreme Sleeping Beauty wakes up a million times on tails or once on heads. There is no choice of the initial credence in heads, 'a', which doesn't lead to extreme certainty either before or after knowing she is at her first waking.
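The updates both camps are committed to can be checked in a few lines; here is a sketch, with the number of tails wakings as a parameter (two for the standard problem, a million for the extreme version).

```python
# Halfer and Thirder credences in heads, before and after learning that
# this is the first waking, for n wakings on tails.
def credences(n_tails_wakings):
    # Halfer: P(heads) = 1/2 on waking; conditioning on 'first waking'
    # (probability 1 under heads, 1/n under tails) gives:
    halfer_before = 0.5
    halfer_after = (0.5 * 1) / (0.5 * 1 + 0.5 * (1 / n_tails_wakings))
    # Thirder: weight outcomes by the number of wakings, then condition.
    thirder_before = 1 / (1 + n_tails_wakings)
    thirder_after = 0.5
    return halfer_before, halfer_after, thirder_before, thirder_after

print(credences(2))          # ≈ (0.5, 0.667, 0.333, 0.5)  standard problem
print(credences(1_000_000))  # ≈ (0.5, 0.999999, 0.000001, 0.5)  extreme version
```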

However the only alternative to this certainty is for Sleeping Beauty to keep odds near 1:1 both before and after she learns she is at her first waking. This entails apparently giving up Bayesian conditionalization. Having excluded 99.9999% of the situations she may have been in where tails would have come up, Sleeping Beauty retains her previous credence in tails.

This is what Nick proposes doing however: his ‘hybrid model’ of Sleeping Beauty. He argues that this does not violate Bayesian conditionalization in cases such as this because Sleeping Beauty is in different indexical positions before and after knowing that she is at her first waking, so her observer-moments (thick time-slices of a person) at the different times need not agree.

I disagree, as I shall explain. Briefly: the disagreement between different observer-moments should not occur and is deeper than it first seems; the existing arguments against so-called non-indexical conditioning also fall against the hybrid model; and Nick fails in his effort to show that Beauty won’t predictably lose money gambling.

Is hybrid Beauty Bayesian?

Nick argues first that a Bayesian may accept having 50:50 credences both before and after knowing that it is Monday, then claims that one should do so, given the absurdities of the Extreme Sleeping Beauty problem above and variants of it. His argument for the first part is as follows (or see p10). There are actually five rather than three relevant indexical positions in the Sleeping Beauty Problem. The extra two are Sleeping Beauty after she knows it is Monday under both heads and tails. He explains that it is the ignorant Beauties who should think the chance of Heads is half, and the informed Mondayers who should think the chance of Heads is still half conditional on it being Monday. Since these are observer-moments in different locations, he claims there is no inconsistency, and Bayesian conditionalization is upheld (presumably meaning that each observer-moment has a self-consistent set of beliefs).

He generalizes that one need not believe P(X) = A, just because one used to think P(X|E) = A and one just learned E. For that to be true the probability of X given that you don’t know E but will learn it would have to be equal to the probability of X given that you do know E but previously did not. Basically, conditional probabilities must not suddenly change just as you learn the conditions hold.

Why exactly a conditional probability might do this is left to the reader’s imagination. In this case Nick infers that it must have happened somehow, as no apparently consistent set of beliefs will save us from making strong updates in the Extreme Sleeping Beauty case and variations on it.

If receiving new evidence gives one leave to break consistency with any previous beliefs on the grounds that one’s conditional credences may have changed with one’s location, there would be little left of Bayesian conditioning in practice. Normal Bayesian conditioning has then been remarkably successful, if in every case of its use a huge range of other inferences would have been equally well supported.

Nick’s calling Beauty’s unchanging belief in even odds consistent for a Bayesian is not because these beliefs meet some sort of Bayesian constraint, but because he is assuming there are no constraints on the relationship between the beliefs of different Bayesian observer-moments. By this reasoning, any set of internally consistent belief sets can be ‘Bayesian’. In the present case we chose our beliefs by a powerful disinclination toward making certain updates. We should admit it is this intuition driving our probability assignments then, and not call it a variant of Bayesianism. And once we have stopped calling it Bayesianism, we must ask if the intuitions that motivate it really have the force behind them that the intuitions supporting Bayesianism in temporally extended people do.

Should observer-moments disagree?

Nick’s argument works by distinguishing every part of Beauty with different information as a different observer. This is used to allow them to safely hold inconsistent beliefs with one another. So this argument is defeated if Bayesians should agree with one another when they know one another’s posteriors, share priors and know one another to be rational. Aumann’s agreement theorem does indeed show this. There is a slight complication in that the disagreement is over probabilities conditional on different locations, but the locations are related in a known way, so it appears they can be converted to disagreement over the same question. For instance past Beauty has a belief about the probability of heads conditional on her being followed by a Beauty who knows it is Monday, and Future Beauty has a belief conditional on the Beauty in her past being followed by one who knows it is Monday (which she now knows is the case).

Intuitively, there is still only one truth, and consistency is a tool for approaching it. Dividing people into a lot of disagreeing parts so that they are consistent by some definition is like paying someone to walk your pedometer in order to get fit.

Consider the disagreement between observer-moments in more detail. For instance, suppose before Sleeping Beauty knows what day it is she assigns 50 percent probability to heads having landed. Suppose she then learns that it is Monday, and still believes she has a 50 percent chance of heads. Let’s call the ignorant observer-moment Amy and the later moment who knows it is Monday Betty.

Amy and Betty do not merely come to different conclusions with different indexical information. Betty believes Amy was wrong, given only the information Amy had. Amy thought that conditional on being followed by an observer-moment who knew it was Monday, the chances of Heads were 2/3. Betty knows this, and knows nothing else except that Amy was indeed followed by an observer-moment who knows it is Monday, yet believes the chances of heads are in fact half. Betty agrees with the reasoning principle Amy used. She also agrees with Amy’s priors. She agrees that were she in Amy’s position, she would have the same beliefs Amy has. Betty also knows that though her location in the world has changed, she is in the same objective world as Amy – either Heads or Tails came up for both of them. Yet Betty must knowingly disagree with Amy about how likely that world is to be one where Heads landed. Neither Betty nor Amy can argue that her belief about their shared world is more likely to be correct than the other’s. If this principle is even a step in the right direction then, these observer-moments could do better by aggregating their apparently messy estimates of reality.

Identity with other unlikely anthropic principles

Though I don’t think Nick mentions it, the hybrid model reasoning is structurally identical to SSSA (the Strong Self-Sampling Assumption, SSA applied to observer-moments) using the reference class of ‘people with exactly one’s current experience’, both before and after receiving evidence (different reference classes in each case since they have different information). In both cases every member of Sleeping Beauty’s reference class shares the same experience. This means the proportion of her reference class who share her current experiences is always one. This allows Sleeping Beauty to stick with the fifty percent chance given by the coin, both before and after knowing she is in her first waking, without any interference from changing potential locations.

SSSA with such a narrow reference class is exactly analogous to non-indexical conditioning, where ‘I observe X’ is interpreted as ‘X is observed by someone in the world’. Under both, possible worlds where your experience occurs nowhere are excluded and all other worlds retain their prior probabilities, normalized. Nick has criticised non-indexical conditioning because it leads to an inability to update on most evidence, thus prohibiting science for instance. Since most people are quite confident that it is possible to do science, they are implicitly confident that non-indexical conditioning is well off the mark. This implies that SSSA using the narrowest reference class is just as implausible, except that it may be more readily traded for SSSA with other reference classes when it gives unwanted results. Nick has suggested SSA should be used with a broader reference class for this reason (e.g. see Anthropic Bias p181), though he also supports using different reference classes at different times.
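As a sketch of what the equivalence amounts to: under non-indexical conditioning (or SSSA with the narrowest reference class), a possible world keeps its prior so long as someone in it has exactly your experiences, which is why neither Extreme Sleeping Beauty nor a scientist in a sufficiently large world ever updates. The worlds and numbers below are toy assumptions.

```python
# Non-indexical conditioning (equivalently, SSSA with the reference class
# 'observer-moments with exactly my current experience'): a world keeps
# its prior weight iff someone in it has exactly your experience.
def nic_update(prior, someone_has_my_experience):
    weights = {w: p * (1 if someone_has_my_experience[w] else 0)
               for w, p in prior.items()}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# Extreme Sleeping Beauty: both the heads world and the tails world contain
# a waking exactly like this one, so the prior never moves.
prior = {"heads": 0.5, "tails": 0.5}
print(nic_update(prior, {"heads": True, "tails": True}))
# {'heads': 0.5, 'tails': 0.5} -- before AND after learning 'first waking'

# The trouble with science: in big enough worlds, every possible reading of
# your instrument is observed by someone, so no experiment updates you.
theories = {"T_a": 0.5, "T_b": 0.5}
print(nic_update(theories, {"T_a": True, "T_b": True}))
# {'T_a': 0.5, 'T_b': 0.5}
```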

These reasoning principles are more appealing in the Extreme Sleeping Beauty case, because our intuition there is to not update on evidence. However if we pick different principles for different circumstances according to which conclusions suit us, we aren’t using those principles, we are using our intuitions. There isn’t necessarily anything inherently wrong with using intuitions, but when there are reasoning principles available that have been supported by a mesh of intuitively correct reasoning and experience, a single untested intuition would seem to need some very strong backing to compete.

Beauty will be terrible at gambling

It first seems that Hybrid Beauty can be Dutch-Booked (offered a collection of bets she would accept and which would lead to certain loss for her), which suggests she is being irrational. Nick gives an example:

Upon awakening, on both Monday and Tuesday,
before either knows what day it is, the bookie offers Beauty the following bet:

Beauty gets $10 if HEADS and MONDAY.
Beauty pays $20 if TAILS and MONDAY.
(If TUESDAY, then no money changes hands.)

On Monday, after both the bookie and Beauty have been informed that it is
Monday, the bookie offers Beauty a further bet:

Beauty gets $15 if TAILS.
Beauty pays $15 if HEADS.

If Beauty accepts these bets, she will emerge $5 poorer.

Nick argues that Sleeping Beauty should not accept the first bet, because the bet will have to be made twice if tails comes up and only once if heads does, so that Sleeping Beauty isn’t informed about which waking she is in by whether she is offered a bet. It is known that when a bet on A vs. B will be made more times conditional on A than conditional on B, it can be irrational to bet according to the odds you assign to A vs. B. Nick illustrates:

…suppose you assign credence 9/10 to the proposition that the trillionth digit in the decimal expansion of π is some number other than 7. A man from the city wants to bet against you: he says he has a gut feeling that the digit is number 7, and he offers you even odds – a dollar for a dollar. Seems fine, but there is a catch: if the digit is number 7, then you will have to repeat exactly the same bet with him one hundred times; otherwise there will just be one bet. If this proviso is specified in the contract, the real bet that is being offered you is one where you get $1 if the digit is not 7 and you lose $100 if it is 7.

However in these cases the problem stems from the bet being paid out many times under one circumstance. Making extra bets that will never be paid out cannot affect the value of a set of bets. Imagine the aforementioned city man offered his deal, but added that all the bets other than your first one would be called off once you had made your first one. You would be in the same situation as if the bet had not included his catch to begin with. It would be an ordinary bet, and you should be willing to bet at the obvious odds. The same goes for Sleeping Beauty.

We can see this more generally. Suppose E(x) is the expected value of x, P(Si) is the probability of situation i arising, and V(Si) is the value to you if it arises. A bet consists of a set of gains or losses to you assigned to situations that may arise.

E(bet) = P(S1)*V(S1) + P(S2)*V(S2) + …

The City Man’s offered bet is bad because it has a large number of terms with negative value and relatively high probability, since they occur together rather than being mutually exclusive in the usual fashion. It is a trick because it is presented at first as if there were only one term with negative value.

Where bets will be written off in certain situations, V(Si) is zero in the terms corresponding to those situations, so the whole terms are also zero, and may as well not exist. This means the first bet Sleeping Beauty is offered in her Dutch-booking test should be made at the same odds as if she would only bet once on either coin outcome. Thus she should take the bet, and will be Dutch booked.
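Checking the arithmetic of Nick’s example directly, as a sketch of the bets listed above:

```python
# Check the Dutch book: Beauty accepts the first bet at every ignorant
# waking and the second after learning it is Monday. Bets offered on
# Tuesday pay nothing, so they drop out of the totals.
def beauty_net_payoff(coin):
    total = 0
    # Bet 1: +$10 if heads & Monday, -$20 if tails & Monday, $0 on Tuesday.
    total += 10 if coin == "heads" else -20   # the Monday waking
    if coin == "tails":
        total += 0                            # the Tuesday waking: no money
    # Bet 2 (made on Monday, day known): +$15 if tails, -$15 if heads.
    total += 15 if coin == "tails" else -15
    return total

print(beauty_net_payoff("heads"))  # -5
print(beauty_net_payoff("tails"))  # -5  -- a sure loss either way
```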

Conclusion

In sum, Nick’s hybrid model is not a new kind of Bayesian updating, but an appeal to a supposed loophole where Bayesianism makes few demands. There doesn’t even seem to be a loophole there however, and if there were it would be a huge impediment to most practical uses of updating. Reasoning principles which are arguably identical to the hybrid model in the relevant ways have been previously discarded by most due to their obstruction of science among other things. Last, Sleeping Beauty really will lose bets if she adopts the hybrid model and is otherwise sensible.

SIA and the Two Dimensional Doomsday Argument

This post might be technical. Try reading this if I haven’t explained everything well enough.

When the Self Sampling Assumption (SSA) is applied to the Great Filter it gives something pretty similar to the Doomsday Argument, which is what it gives without any filter. SIA gets around the original Doomsday Argument. So why can’t it get around the Doomsday Argument in the Great Filter?

The Self Sampling Assumption (SSA) says you are more likely to be in possible worlds which contain larger ratios of people you might be vs. people you know you are not*.

If you have a silly hat, SSA says you are more likely to be in world 2 - assuming Worlds 1 and 2 are equally likely to exist (i.e. you haven't looked aside at your companions), and your reference class is people.

The Doomsday Argument uses the Self Sampling Assumption. Briefly, it argues that if there are many more generations of humans, the ratio of people who might be you (those born at the same time as you) to people you can’t be (everyone else) will be smaller than it would be if there are few future generations of humans. Thus few future generations are more likely than previously estimated.

An unusually large ratio of people in your situation can be achieved by a possible world having unusually few people unlike you in it or unusually many people like you, or any combination of these.

 

Fewer people who can't be me or more people who may be me make a possible world more likely according to SSA.

For instance on the horizontal dimension, you can compare a set of worlds which all have the same number of people like you, and different numbers of people you are not. The world with few people unlike you has the largest increase in probability.

 

Doomsday: the top row from the previous diagram. The Doomsday Argument uses possible worlds varying in this dimension only.

The Doomsday Argument is an instance of variation in the horizontal dimension only. In every world there is one person with your birth rank, but the numbers of people with future birth ranks differ.

At the other end of the spectrum you could compare worlds with the same number of future people and vary the number of current people, as long as you are ignorant of how many current people there are.

The vertical axis. The number of people in your situation changes, while the number of others stays the same. The world with a lot of people like you gets the largest increase in probability.

This gives a sort of Doomsday Argument: SSA favours a large current population, and since the future population is held fixed in this comparison, that means the population will fall and most groups won’t survive.

The Self Indication Assumption (SIA) is equivalent to using SSA and then multiplying the results by the total population of people both like you and not.

In the horizontal dimension, SIA undoes the Doomsday Argument. SSA favours smaller total populations in this dimension, which are disfavoured to the same extent by SIA, perfectly cancelling.

[1/total] * total = 1
(the bracketed factor is the SSA shift alone)

In vertical cases however, SIA actually makes the Doomsday Argument analogue stronger. The worlds favoured by SSA in this case are the larger ones, because they have more current people. These larger worlds are further favoured by SIA.

[(total - 1)/total] * total = total - 1
(here all but one of the total are people you might be)
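The two dimensions side by side with toy numbers (a sketch; ‘like_me’ counts the people you might be and ‘total’ is the whole population of the world):

```python
# SSA and SIA shifts for the two dimensions, assuming equal priors over
# the compared worlds. The populations are toy numbers for illustration.
def shifts(like_me, total):
    ssa = like_me / total          # SSA weight (relative, before normalizing)
    sia = ssa * total              # SIA weight = SSA weight x total population
    return ssa, sia

# Horizontal dimension (the Doomsday Argument): same number of people
# like me, different totals.
print(shifts(like_me=1, total=10))    # SSA 0.1,  SIA 1
print(shifts(like_me=1, total=100))   # SSA 0.01, SIA 1
# SSA favours the small world 10:1; SIA cancels this exactly.

# Vertical dimension: same number of others (one here), different numbers
# of people like me.
print(shifts(like_me=9, total=10))    # SSA 0.9,  SIA 9
print(shifts(like_me=99, total=100))  # SSA 0.99, SIA 99
# SSA mildly favours the big world; SIA strengthens this to 11:1.
```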

The second type of situation is relatively uncommon, because you will tend to know more about the current population than the future population. However cases in between the two extremes are not so rare. We are uncertain about creatures at about our level of technology on other planets for instance, and also uncertain about creatures at some future levels.

This means the Great Filter scenario I have written about is an in-between scenario, which is why the SIA shift doesn’t cancel the SSA Doomsday Argument there, but rather makes it stronger.

Expanded from p32 of my thesis.

——————————————-
*or observers you might be vs. those you are not for instance – the reference class may be anything, but that is unnecessarily complicated for the point here.

SIA says AI is no big threat

Artificial Intelligence could explode in power and leave the direct control of humans in the next century or so. It may then move on to optimize the reachable universe to its goals. Some think this sequence of events likely.

If this occurred, it would constitute an instance of our star passing the entire Great Filter. If we should cause such an intelligence explosion then, we are the first civilization in roughly the past light cone to be in such a position. If anyone else had been in this position, our part of the universe would already be optimized, which it arguably doesn’t appear to be. This means that if there is a big (optimizing much of the reachable universe) AI explosion in our future, the entire strength of the Great Filter is in steps before us.

This means a big AI explosion is less likely after considering the strength of the Great Filter, and much less likely if one uses the Self Indication Assumption (SIA).

The large minimum total filter strength contained in the Great Filter is evidence for larger filters in the past and in the future. This is evidence against the big AI explosion scenario, which requires that the future filter is tiny.

SIA implies that we are unlikely to give rise to an intelligence explosion for similar reasons, but probably much more strongly. As I pointed out before, SIA says that future filters are much more likely to be large than small. This is easy to see in the case of AI explosions. Recall that SIA increases the chances  of hypotheses where there are more people in our present situation. If we precede an AI explosion, there is only one civilization in our situation, rather than potentially many if we do not. Thus the AI hypothesis is disfavored (by a factor the size of the extra filter it requires before us).

What the Self Sampling Assumption (SSA), an alternative principle to SIA, says depends on the reference class. If the reference class includes AIs, then we should strongly not anticipate such an AI explosion. If it does not, then we strongly should. Both conclusions are basically due to the Doomsday Argument.

In summary, if you begin with some uncertainty about whether we precede an AI explosion, then updating on the observed large total filter and accepting SIA should make you much less confident in that outcome. The Great Filter and SIA don’t just mean that we are less likely to peacefully colonize space than we thought, they also mean we are less likely to horribly colonize it, via an unfriendly AI explosion.

Light cone eating AI explosions are not filters

Some existential risks can’t account for any of the Great Filter. Here are two categories of existential risks that are not filters:

Too big: any disaster that would destroy everyone in the observable universe at once, or destroy space itself, is out. If others had been filtered by such a disaster in the past, we wouldn’t be here either. This excludes events such as simulation shutdown and breakdown of a metastable vacuum state we are in.

Not the end: Humans could be destroyed without the causal path to space colonization being destroyed. Also much of human value could be destroyed without humans being destroyed. For example, a superintelligent AI would presumably be better at colonizing the stars than humans are; the same goes for transcending uploads. Repressive totalitarian states and long term erosion of value could destroy a lot of human value and still lead to interstellar colonization.

Since these risks are not filters, neither the knowledge that there is a large minimum total filter nor the use of SIA increases their likelihood. SSA still increases their likelihood for the usual Doomsday Argument reasons. I think the rest of the risks listed in Nick Bostrom’s paper can be filters. According to SIA, averting these filter existential risks should be prioritized more highly relative to averting non-filter existential risks such as those in this post. So for instance AI is less of a concern relative to other existential risks than otherwise estimated. SSA’s implications are less clear – the destruction of everything in the future is a pretty favorable inclusion in a hypothesis under SSA with a broad reference class, but as always everything depends on the reference class.

Anthropic principles agree on bigger future filters

I finished my honours thesis, so this blog is back on. The thesis is downloadable here and also from the blue box in the lower right sidebar. I’ll blog some other interesting bits soon.

My main point was that two popular anthropic reasoning principles, the Self Indication Assumption (SIA) and the Self Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps will be larger than we otherwise think, including the many future filter steps that are existential risks.

Figure 1: SIA likes possible worlds with big populations at our stage, which means small past filters, which means big future filters.

SIA says the probability of being in a possible world is proportional to the number of people it contains who you could be. SSA says it’s proportional to the fraction of people (or some other reference class) it contains who you could be. FNC says the probability of being in a possible world is proportional to the chance of anyone in that world having exactly your experiences. That chance is greater the larger the population of people like you in relevant ways, so FNC generally gets similar answers to SIA. For a lengthier account of all these, see here.

SIA increases expectations of larger future filter steps because it favours smaller past filter steps. Since there is a minimum total filter size, this means it favours big future steps. This I have explained before. See Figure 1. Radford Neal has demonstrated similar results with FNC.

Figure 2: A larger filter between future stages in our reference class makes the population at our own stage a larger proportion of the total population. This increases the probability under SSA.

SSA can give a variety of results according to reference class choice. Generally it directly increases expectations of both larger future filter steps and smaller past filter steps, but only for those steps between stages of development that are at least partially included in the reference class.

For instance if the reference class includes all human-like things, perhaps it stretches from ourselves to very similar future people who have avoided many existential risks. In this case, SSA increases the chances of large filter steps between these stages, but says little about filter steps before us, or after the future people in our reference class. This is basically the Doomsday Argument – larger filters in our future mean fewer future people relative to us. See Figure 2.

Figure 3: In the world with the larger early filter, the population at many stages including ours is smaller relative to some early stages. This makes the population at our stage a smaller proportion of the whole, which makes that world less likely. (The populations at each stage are a function of the population per relevant solar system as well as the chance of a solar system reaching that stage, which is not illustrated here).

With a reference class that stretches to creatures in filter stages back before us, SSA increases the chances of smaller past filter steps between those stages. This is because those filters make observers at almost all stages of development (including ours) less plentiful relative to at least one earlier stage of creatures in our reference class. This makes those at our own stage a smaller proportion of the population of the reference class. See Figure 3.
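To see the agreement concretely, here is a toy model with made-up populations: two equally likely worlds sharing the same minimum total filter, one with most of the filter behind our stage and one with most of it ahead. The SSA reference class here is the forward-stretching one described above.

```python
# Toy model: two equally likely worlds with the same minimum total filter,
# split differently between past and future. All numbers are made up.
worlds = {
    # for each world: civs at our stage, civs passing the future filter,
    # and observers per late civilization
    "small past / BIG FUTURE filter": {"our_stage": 100, "late": 1, "late_size": 100},
    "big past / small future filter": {"our_stage": 10, "late": 8, "late_size": 100},
}
prior = {w: 0.5 for w in worlds}

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# SIA: proportional to the number of observers you might be (our stage).
sia = normalize({w: prior[w] * d["our_stage"] for w, d in worlds.items()})

# FNC: proportional to the chance that someone has exactly your experiences;
# for a small per-observer chance p this is ~ n * p, so it behaves like SIA.
p = 1e-6
fnc = normalize({w: prior[w] * (1 - (1 - p) ** d["our_stage"]) for w, d in worlds.items()})

# SSA, with a reference class of our-stage observers plus later observers:
# proportional to the fraction of the reference class at our stage.
ssa = normalize({
    w: prior[w] * d["our_stage"] / (d["our_stage"] + d["late"] * d["late_size"])
    for w, d in worlds.items()})

for name, post in [("SIA", sia), ("FNC", fnc), ("SSA", ssa)]:
    print(name, {w: round(v, 3) for w, v in post.items()})
# All three put most of their probability on the big-future-filter world.
```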

The predictions of the different principles differ in details such as the extent of the probability shift and the effect of timing. However it is not necessary to resolve anthropic disagreement to believe we have underestimated the chances of larger filters in our future. As long as we think something like one of the above three principles is likely to be correct, we should update our expectations already.