Tag Archives: Selection effect

Person moments make sense of anthropics

Often people think that various forms of anthropic reasoning require you to change your beliefs in ways other than conditionalizing on evidence. This is false, at least in the cases I know of. I shall talk about Frank Arntzenius’ paper Some Problems for Conditionalization and Reflection [gated] because it explains the issue well, though I believe his current views agree with mine.

He presents five thought experiments: Two Roads to Shangri La, The Prisoner, John Collins’s Prisoner, Sleeping Beauty and Duplication. In each of them, it seems the (arguably) correct answer violates van Fraassen’s reflection principle, which basically says that if you expect to believe something in the future without having been e.g. hit over the head between now and then, you should believe it now. For instance the thirder position in Sleeping Beauty seems to violate this principle because before the experiment Beauty believes there is a fifty percent chance of heads, and that when she wakes up she will think there is a one in three chance. Arntzenius argued that these seemingly correct answers really are the correct ones, and claimed that they violate the reflection principle because credences can evolve in two ways other than by conditionalization.

First he said credences can shift, for instance through time. I know that tomorrow I will have a higher credence in it being Monday than I do today, and yet it would not be rational for me to increase my credence in it being Monday now on this basis. They can also ‘spread out’. For instance if you know you are in Fairfax today, and that tomorrow a perfect replica of your brain experiencing Fairfax will be made and placed in a vat in Canberra, tomorrow your credence will go from being concentrated in Fairfax to being spread between there and Canberra. This is despite no damage having been done to your own brain. As Arntzenius pointed out, such an evolution of credence looks like quite the opposite of conditionalization, since conditionalization consists of striking out possibilities that your information excludes – it never opens up new possibilities.

I agree that beliefs should evolve in these two ways. However they are both really conditionalization, just obscured. They make sense as conditionalization when you think of them as carried out by different momentary agents, based on the information they infer from their connections to other momentary agents with certain beliefs (e.g. an immediately past self).

Normal cases can be considered this way quite easily. Knowing that you are the momentary agent that followed a few seconds after an agent who knew a certain set of facts about the objective world, and who is (you assume) completely trustworthy, means you can simply update the same prior with those same facts and come to the same conclusion. That is, you don’t really have to do anything. You can treat a stream of moments as a single agent. This is what we usually do.

However sometimes being connected in a certain way to another agent does not make everything that is true for them true for you. Most obviously, if they are a past self and know it is 12 o’clock, your connection via being their one-second-later self means you should exclude worlds where you are not at time 12:00:01. You have still learned from your known relationship with that agent and conditionalized, but you have not learned that what is true of them is true of you, because it isn’t. This is the first way Arntzenius mentioned that credences seem to evolve through time not by conditionalization.

The second way occurs when one person-moment is at location X, and another person moment has a certain connection to the person at X, but there is more than one possible connection of that sort. For instance when two later people both remember being an earlier person because the earlier person was replicated in some futuristic fashion. Then while the earlier person moment could condition on their exact location, the later one must condition on being in one of several locations connected that way to the earlier person’s location, so their credence spreads over more possibilities than that of the earlier self. If you call one of these later momentary agents the same person as the earlier one, and say they are conditionalizing, it seems they are doing it wrong. But considered as three different momentary people learning from their connections they are just conditionalizing as usual.

What exactly the later momentary people should believe is a matter of debate, but I think that can be framed entirely as a question of what their state spaces and priors look like.

Momentary humans almost always pass lots of information from one to the next, chronologically along chains of memory through non-duplicated people, knowing their approximate distance from one another. So most of the time they can treat themselves as single units who just have to update on any information coming from outside, as I explained. But conditionalization is not specific to these particular biological constructions; and when it is applied to information gained through other connections between agents, the resulting time series of beliefs within one human will end up looking different to that in a chain with no unusual extra connections.

This view also suggests that having cognitive defects, such as memory loss, should not excuse anyone from having credences, as for instance Arntzenius argued it should in his paper Reflections on Sleeping Beauty: “in the face of forced irrational changes in one’s degrees of belief one might do best simply to jettison them altogether”. There is nothing special about credences derived from beliefs of a past agent you identify with. They are just another source of information. If the connection to other momentary agents is different to usual, for instance through forced memory loss, update on it as usual.

I am anti-awareness and you should be too

People seem to like raising awareness a lot. One might suspect too much, assuming the purpose is to efficiently solve whatever problem the awareness is being raised about. It’s hard to tell whether it is too much by working out how much is the right amount then checking if it matches what people do. But a feasible heuristic approach is to consider factors that might bias people one way or the other, relative to what is optimal.

Christian Lander at Stuff White People Like suggests some reasons raising awareness should be an inefficiently popular solution to other people’s problems:

This belief [that raising awareness will solve everything] allows them to feel that sweet self-satisfaction without actually having to solve anything or face any difficult challenges…

What makes this even more appealing for white people is that you can raise “awareness” through expensive dinners, parties, marathons, selling t-shirts, fashion shows, concerts, eating at restaurants and bracelets.  In other words, white people just have to keep doing stuff they like, EXCEPT now they can feel better about making a difference…

So to summarize – you get all the benefits of helping (self satisfaction, telling other people) but no need for difficult decisions or the ensuing criticism (how do you criticize awareness?)…

He seems to suspect that people are not trying to solve problems, but I shan’t argue about that here. At least some people think that they are trying to effectively campaign; this post is concerned with biases they might face. Christian may or may not demonstrate a bias for these people. All things equal, it is better to solve problems in easy, fun, safe ways. However if it is easier to overestimate the effectiveness of easy, fun, safe things, we probably raise awareness too much. I suspect this is true. I will add three more reasons to expect awareness to be over-raised.

First, people tend to identify with their moral concerns. People identify with moral concerns much more than they do with their personal, practical concerns for instance. Those who think the environment is being removed too fast are proudly environmentalists while those who think the bushes on their property are withering too fast do not bother to advertise themselves with any particular term, even if they spend much more time trying to correct the problem. It’s not part of their identity.

People like others to know about their identities. And raising awareness is perfect for this. Continually incorporating one’s concern about foreign forestry practices into conversations can be awkward, effortful and embarrassing. Raising awareness displays your identity even more prominently, while making this an unintended side effect of costly altruism for the cause rather than purposeful self advertisement.

That raising awareness is driven in part by desire to identify is evidenced by the fact that while ‘preaching to the converted’ is the epitome of verbal uselessness, it is still a favorite activity for those raising awareness, for instance at rallies, dinners and lectures. Wanting to raise awareness to people who are already well aware suggests that the information you hope to transmit is not about the worthiness of the cause. What else new could you be showing them? An obvious answer is that they learn who else is with the cause. Which is some information about the worthiness of the cause, but has other reasons for being presented. Robin Hanson has pointed out that breast cancer awareness campaign strategy relies on everyone already knowing about not just breast cancer but about the campaign. He similarly concluded that the aim is probably to show a political affiliation.


In many cases of identifying with a group to oppose some foe, it is useful for the group if you often declare your identity proudly and commit yourself to the group. If we are too keen to raise awareness about our identities, perhaps we are just used to those cases, and treat breast cancer like any other enemy who might be scared off by assembling a large and loyal army who don’t like it. I don’t know. But for whatever reason, I think our enthusiasm for increased awareness of everything is given a strong push by our enthusiasm for visibly identifying with moral causes.

Secondly and relatedly, moral issues arouse a person’s drive to determine who is good and who is bad, and to blame the bad ones. This urge to judge and blame should, for instance, increase the salience of everyone around you eating meat if you are a vegetarian. This is at the expense of giving attention to any of the larger scale features of the world which contribute to how much meat people eat and how good or bad this is for animals. Rather than finding a particularly good way to solve the problem of too many animals suffering, you could easily be sidetracked by the fact that your friends are being evil. Raising awareness seems like a pretty good solution if the glaring problem is that everyone around you is committing horrible sins, perhaps inadvertently.

Lastly, raising awareness is specifically designed to be visible, so it is intrinsically especially likely to spread among creatures who copy one another. If I am concerned about climate change, possible actions that will come to mind will be those I have seen others do. I have seen in great detail how people march in the streets or have stalls or stickers or tell their friends. I have little idea how people develop more efficient technologies or orchestrate less publicly visible political influence, or even how they change the insulation in their houses. This doesn’t necessarily mean that there is too much awareness raising; it is less effort to do things you already know how to do, so it is better to do them, all things equal. However too much awareness raising will happen if we don’t account for there being a big selection effect other than effectiveness in which solutions we will know about, and expend a bit more effort finding much more effective solutions accordingly.

So there are my reasons to expect too much awareness is raised. It’s easy and fun, it lets you advertise your identity, it’s the obvious thing to do when you are struck by the badness of those around you, and it is the obvious thing to do full stop. Are there any opposing reasons people would tend to be biased against raising awareness? If not, perhaps I should reconsider stopping telling you about this problem and finding a more effective way to lower awareness instead.

Sleeping Beauty should remain pure


Consider the Sleeping Beauty Problem. Sleeping Beauty is put to sleep on Sunday night. A coin is tossed. If it lands heads, she is awoken once on Monday, then sleeps until the end of the experiment. If it lands tails, she is woken once on Monday, drugged to remove her memory of this event, then awoken once on Tuesday, before sleeping till the end of the experiment. The awakenings during the experiment are indistinguishable to Beauty, so when she awakens she doesn’t know what day it is or how the coin fell. The question is this: when Beauty wakes up on one of these occasions, how confident should she be that heads came up?

There are two popular answers, 1/2 and 1/3. However virtually everyone agrees that if Sleeping Beauty should learn that it is Monday, her odds on Tails should be halved, from whatever they were initially. So ‘Halfers’ come to think heads has a 2/3 chance, and ‘Thirders’ come to think heads is as likely as tails. This is the standard Bayesian way to update, and is pretty uncontroversial.

Now consider a variation on the Sleeping Beauty Problem where Sleeping Beauty will be woken up one million times on tails and only once on heads. Again, the probability you initially put on heads is determined by the reasoning principle you use, but the probability shift if you are to learn that you are in the first awakening will be the same either way. You will have to shift your odds by a million to one toward heads. Nick Bostrom points out that in this scenario, either before or after this shift you will have to be extremely certain either of heads or of tails, and that such extreme certainty seems intuitively unjustified, either before or after knowing you are experiencing the first wakening.

Extreme Sleeping Beauty wakes up a million times on tails or once on heads. There is no choice of 'a' which doesn't lead to extreme certainty either before or after knowing she is at her first waking.
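The updates in both the ordinary and extreme versions can be checked with a little arithmetic. Here is a sketch in Python (the post itself contains no code; the function and its names are mine) that conditionalizes a prior over (coin, waking) pairs on learning that this is the first waking:

```python
from fractions import Fraction

def heads_credence(n_tails_wakings, prior_heads):
    """Return (credence in heads before, credence in heads after) learning
    'this is the first waking'. The prior puts prior_heads on the single
    heads waking and spreads the rest evenly over the tails wakings."""
    prior_heads = Fraction(prior_heads)
    p_tails_each = (1 - prior_heads) / n_tails_wakings
    # Conditionalizing keeps only (heads, waking 1) and (tails, waking 1).
    posterior = prior_heads / (prior_heads + p_tails_each)
    return prior_heads, posterior

# Ordinary Sleeping Beauty: two tails wakings.
print(heads_credence(2, Fraction(1, 2)))   # halfer: 1/2 before, 2/3 after
print(heads_credence(2, Fraction(1, 3)))   # thirder: 1/3 before, 1/2 after

# Extreme Sleeping Beauty: a million tails wakings.
M = 10**6
print(heads_credence(M, Fraction(1, 2)))       # extreme certainty after the shift
print(heads_credence(M, Fraction(1, M + 1)))   # extreme certainty before it
```

With two tails wakings this reproduces the familiar halfer and thirder updates; with a million it shows Bostrom’s point: whichever prior you start with, extreme certainty appears either before or after the shift.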

However the only alternative to this certainty is for Sleeping Beauty to keep odds near 1:1 both before and after she learns she is at her first waking. This entails apparently giving up Bayesian conditionalization. Having excluded 99.9999% of the situations she may have been in where tails would have come up, Sleeping Beauty retains her previous credence in tails.

This is what Nick proposes doing however: his ‘hybrid model’ of Sleeping Beauty. He argues that this does not violate Bayesian conditionalization in cases such as this because Sleeping Beauty is in different indexical positions before and after knowing that she is at her first waking, so her observer-moments (thick time-slices of a person) at the different times need not agree.

I disagree, as I shall explain. Briefly, the disagreement between different observer-moments should not occur and is deeper than it first seems, the existing arguments against so called non-indexical conditioning also fall against the hybrid model, and Nick fails in his effort to show that Beauty won’t predictably lose money gambling.

Is hybrid Beauty Bayesian?

Nick argues first that a Bayesian may accept having 50:50 credences both before and after knowing that it is Monday, then claims that one should do so, given the absurdities of the Extreme Sleeping Beauty problem above and variants of it. His argument for the first part is as follows (or see p10). There are actually five rather than three relevant indexical positions in the Sleeping Beauty Problem. The extra two are Sleeping Beauty after she knows it is Monday under both heads and tails. He explains that it is the ignorant Beauties who should think the chance of Heads is half, and the informed Mondayers who should think the chance of Heads is still half conditional on it being Monday. Since these are observer-moments in different locations, he claims there is no inconsistency, and Bayesian conditionalization is upheld (presumably meaning that each observer-moment has a self-consistent set of beliefs).

He generalizes that one need not believe P(X) = A, just because one used to think P(X|E) = A and one just learned E. For that to be true the probability of X given that you don’t know E but will learn it would have to be equal to the probability of X given that you do know E but previously did not. Basically, conditional probabilities must not suddenly change just as you learn the conditions hold.

Why exactly a conditional probability might do this is left to the reader’s imagination. In this case Nick infers that it must have happened somehow, as no apparently consistent set of beliefs will save us from making strong updates in the Extreme Sleeping Beauty case and variations on it.

If receiving new evidence gives one leave to break consistency with any previous beliefs on grounds that one’s conditional credences may have changed with one’s location, there would be little left of Bayesian conditioning in practice. Normal Bayesian conditioning is remarkably successful then, if we are to learn that a huge range of other inferences were equally well supported in any case of its use.

When Nick calls Beauty’s unchanging belief in even odds consistent for a Bayesian, it is not because these beliefs meet some sort of Bayesian constraint, but because he assumes there are no constraints on the relationship between the beliefs of different Bayesian observer-moments. By this reasoning, any set of internally consistent belief sets can be ‘Bayesian’. In the present case we chose our beliefs by a powerful disinclination toward making certain updates. We should admit it is this intuition driving our probability assignments then, and not call it a variant of Bayesianism. And once we have stopped calling it Bayesianism, we must ask if the intuitions that motivate it really have the force behind them that the intuitions supporting Bayesianism in temporally extended people do.

Should observer-moments disagree?

Nick’s argument works by distinguishing every part of Beauty with different information as a different observer. This is used to allow them to safely hold inconsistent beliefs with one another. So this argument is defeated if Bayesians should agree with one another when they know one another’s posteriors, share priors and know one another to be rational. Aumann’s agreement theorem does indeed show this. There is a slight complication in that the disagreement is over probabilities conditional on different locations, but the locations are related in a known way, so it appears they can be converted to disagreement over the same question. For instance past Beauty has a belief about the probability of heads conditional on her being followed by a Beauty who knows it is Monday, and future Beauty has a belief conditional on the Beauty in her past being followed by one who knows it is Monday (which she now knows it was).

Intuitively, there is still only one truth, and consistency is a tool for approaching it. Dividing people into a lot of disagreeing parts so that they are consistent by some definition is like paying someone to walk your pedometer in order to get fit.

Consider the disagreement between observer-moments in more detail. For instance, suppose before Sleeping Beauty knows what day it is she assigns 50 percent probability to heads having landed. Suppose she then learns that it is Monday, and still believes she has a 50 percent chance of heads. Let’s call the ignorant observer-moment Amy and the later moment who knows it is Monday Betty.

Amy and Betty do not merely come to different conclusions with different indexical information. Betty believes Amy was wrong, given only the information Amy had. Amy thought that conditional on being followed by an observer-moment who knew it was Monday, the chances of Heads were 2/3. Betty knows this, and knows nothing else except that Amy was indeed followed by an observer-moment who knows it is Monday, yet believes the chances of heads are in fact half. Betty agrees with the reasoning principle Amy used. She also agrees with Amy’s priors. She agrees that were she in Amy’s position, she would have the same beliefs Amy has. Betty also knows that though her location in the world has changed, she is in the same objective world as Amy – either Heads or Tails came up for both of them. Yet Betty must knowingly disagree with Amy about how likely that world is to be one where Heads landed. Neither Betty nor Amy can argue that her belief about their shared world is more likely to be correct than the other’s. If this principle is even a step in the right direction then, these observer-moments could do better by aggregating their apparently messy estimates of reality.

Identity with other unlikely anthropic principles

Though I don’t think Nick mentions it, the hybrid model reasoning is structurally identical to SSSA (the strong self-sampling assumption) using the reference class of ‘people with exactly one’s current experience’, both before and after receiving evidence (different reference classes in each case, since they have different information). In both cases every member of Sleeping Beauty’s reference class shares the same experience. This means the proportion of her reference class who share her current experiences is always one. This allows Sleeping Beauty to stick with the fifty percent chance given by the coin, both before and after knowing she is in her first waking, without any interference from changing potential locations.

SSSA with such a narrow reference class is exactly analogous to non-indexical conditioning, where ‘I observe X’ is interpreted as ‘X is observed by someone in the world’. Under both, possible worlds where your experience occurs nowhere are excluded and all other worlds retain their prior probabilities, normalized. Nick has criticised non-indexical conditioning because it leads to an inability to update on most evidence, thus prohibiting science for instance. Since most people are quite confident that it is possible to do science, they are implicitly confident that non-indexical conditioning is well off the mark. This implies that SSSA using the narrowest reference class is just as implausible, except that it may be more readily traded for SSSA with other reference classes when it gives unwanted results. Nick has suggested SSA should be used with a broader reference class for this reason (e.g. see Anthropic Bias p181), though he also supports using different reference classes at different times.

These reasoning principles are more appealing in the Extreme Sleeping Beauty case, because our intuition there is to not update on evidence. However if we pick different principles for different circumstances according to which conclusions suit us, we aren’t using those principles, we are using our intuitions. There isn’t necessarily anything inherently wrong with using intuitions, but when there are reasoning principles available that have been supported by a mesh of intuitively correct reasoning and experience, a single untested intuition would seem to need some very strong backing to compete.

Beauty will be terrible at gambling

It first seems that Hybrid Beauty can be Dutch-Booked (offered a collection of bets she would accept and which would lead to certain loss for her), which suggests she is being irrational. Nick gives an example:

Upon awakening, on both Monday and Tuesday,
before either knows what day it is, the bookie offers Beauty the following bet:

Beauty gets $10 if HEADS and MONDAY.
Beauty pays $20 if TAILS and MONDAY.
(If TUESDAY, then no money changes hands.)

On Monday, after both the bookie and Beauty have been informed that it is
Monday, the bookie offers Beauty a further bet:

Beauty gets $15 if TAILS.
Beauty pays $15 if HEADS.

If Beauty accepts these bets, she will emerge $5 poorer.
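The sure loss is easy to verify by adding up the payoffs under each outcome (a trivial sketch; Python and the variable names are mine, not from Nick’s paper):

```python
# If Beauty accepts both bets:
# Heads: bet 1 pays her $10 on Monday; bet 2 costs her $15.
heads_total = +10 - 15
# Tails: bet 1 costs her $20 on Monday (the Tuesday offer is void),
# and bet 2 pays her $15.
tails_total = -20 + 15

print(heads_total, tails_total)  # -5 -5: a guaranteed $5 loss either way
```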

Nick argues that Sleeping Beauty should not accept the first bet, because the bet will have to be made twice if tails comes up and only once if heads does, so that Sleeping Beauty isn’t informed about which waking she is in by whether she is offered a bet. It is known that when a bet on A vs. B will be made more times conditional on A than conditional on B, it can be irrational to bet according to the odds you assign to A vs. B. Nick illustrates:

…suppose you assign credence 9/10 to the proposition that the trillionth digit in the decimal expansion of π is some number other than 7. A man from the city wants to bet against you: he says he has a gut feeling that the digit is number 7, and he offers you even odds – a dollar for a dollar. Seems fine, but there is a catch: if the digit is number 7, then you will have to repeat exactly the same bet with him one hundred times; otherwise there will just be one bet. If this proviso is specified in the contract, the real bet that is being offered you is one where you get $1 if the digit is not 7 and you lose $100 if it is 7.

However  in these cases the problem stems from the bet being paid out many times under one circumstance. Making extra bets that will never be paid out cannot affect the value of a set of bets. Imagine the aforementioned city man offered his deal, but added that all the bets other than your first one would be called off once you had made your first one. You would be in the same situation as if the bet had not included his catch to begin with. It would be an ordinary bet, and you should be willing to bet at the obvious odds. The same goes for Sleeping Beauty.

We can see this more generally. Suppose E(x) is the expected value of x, P(Si) is probability of situation i arising, and V(i) is the value to you if it arises. A bet consists of a set of gains or losses to you assigned to situations that may arise.

E(bet) = P(S1)*V(S1) + P(S2)*V(S2) + …

The City Man’s offered bet is bad because it has a large number of terms with negative value and relatively high probability, since they occur together rather than being mutually exclusive in the usual fashion. It is a trick because it is presented at first as if there were only one term with negative value.

Where bets will be written off in certain situations, V(Si) is zero in the terms corresponding to those situations, so the whole terms are also zero, and may as well not exist. This means the first bet Sleeping Beauty is offered in her Dutch-booking test should be made at the same odds as if she would only bet once on either coin outcome. Thus she should take the bet, and will be Dutch booked.
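On this accounting both bets look fair to hybrid Beauty, which is exactly why accepting both guarantees her loss. A sketch of the expected values under the hybrid credences (P(heads) = 1/2 when ignorant, with the tails mass split evenly between Monday and Tuesday, and 50:50 again once she knows it is Monday; Python and the names are mine):

```python
from fractions import Fraction

def expected_value(bet):
    """E(bet) = P(S1)*V(S1) + P(S2)*V(S2) + ..."""
    return sum(p * v for p, v in bet)

# Bet 1, evaluated by ignorant hybrid Beauty. The Tuesday term has V = 0,
# so it drops out of the sum, as argued above.
bet1 = [(Fraction(1, 2), +10),   # heads & Monday
        (Fraction(1, 4), -20),   # tails & Monday
        (Fraction(1, 4), 0)]     # tails & Tuesday: bet void

# Bet 2, evaluated after learning it is Monday, at the hybrid's 50:50 odds.
bet2 = [(Fraction(1, 2), -15),   # heads
        (Fraction(1, 2), +15)]   # tails

print(expected_value(bet1), expected_value(bet2))  # both 0: fair, so she accepts
```

Since each bet has expected value zero by her own lights, she has no grounds to refuse either, and together they cost her $5 whatever the coin did.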

Conclusion

In sum, Nick’s hybrid model is not a new kind of Bayesian updating, but use of a supposed loophole where Bayesianism is supposed to have few requirements. There doesn’t even seem to be a loophole there however, and if there were it would be a huge impediment to most practical uses of updating. Reasoning principles which are arguably identical to the hybrid model in the relevant ways have been previously discarded by most due to their obstruction of science among other things.  Last, Sleeping Beauty really will lose bets if she adopts the hybrid model and is otherwise sensible.

You might be population too

I recently attended a dinner forum on what size the population should be. All of the speakers held the same position: small. The only upsides of population mentioned were to horrid profit seeking people like property developers. Yet the downsides to population are horrendous – all our resource use problems multiplied! As one speaker quoted “The population can’t increase forever, and as a no brainer it should stop sooner rather than later”. As there are no respectable positives in the equation, no need for complicated maths. Smaller is better.

I suggested to my table what I saw as an obvious omission in this model: I at least am enjoying the population being big enough to have me in it, so I would at least consider putting a big positive value on human lives. My table seemed to think this an outlandish philosophical position. I suggested that if resource use is the problem, we fix externalities there, but they thought this just as roundabout a way of getting ‘sustainability’, whereas cutting the population seems straightforward and there’s nothing to lose by it. I suggested to the organizer that the positive of human existence deserved a mention (in a multiple hour forum), and he explained that if we didn’t exist we wouldn’t notice, as though that settles it.

But the plot thickened further. Why do you suppose we should keep the population low? “We should leave the world in as good or a better condition as we got it in” one speaker explained. So out of concern for future generations apparently. Future people don’t benefit from being alive, but it’s imperative that we ensure they have cheap water bills long before they have any such preferences.

Anthropic summary

I mean to write about anthropic reasoning more in future, so I offer you a quick introduction to a couple of anthropic reasoning principles. There’s also a link to it in ‘pages’ in the sidebar. I’ll update it later – there are arguments I haven’t written up yet, plus I’m in the middle of reading the literature, so hope to come across more good ones there.

SIA on other minds

Another interesting implication, if the self indication assumption (SIA) is right, is that solipsism is much less likely to be correct than you previously thought, and relatedly the problem of other minds is less problematic.

Solipsists think they are unjustified in believing in a world external to their minds, as one only ever knows one’s own mind and there is no obvious reason the patterns in it should be driven by something else (curiously, holding such a position does not entirely dissuade people from trying to convince others of it). This can then be debated on grounds of whether a single mind imagining the world is more or less complex than a world causing such a mind to imagine a world.

The problem of other minds is that even if you believe in the outside world that you can see, you can’t see other minds. Most of the evidence for them is by analogy to yourself, which is only one ambiguous data point (should I infer that all humans are probably conscious? All things? All girls? All rooms at night time?).

SIA says many minds are more likely than one, given that you exist. Imagine you are wondering whether this is World 1, with a single mind among billions of zombies, or World 2, with billions of conscious minds. If you start off roughly uncertain, updating on your own conscious existence with SIA shifts the probability of world 2 to billions of times the probability of world 1.

Similarly for solipsism. Other minds probably exist. From this you may conclude the world around them does too, or just that your vat isn’t the only one.
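The size of this update can be made concrete with a quick sketch (Python; the world names are mine and illustrative): weight each world’s prior by its number of minds, then renormalize.

```python
from fractions import Fraction

# Two candidate worlds, roughly equally likely a priori.
worlds = {"world 1: one mind among zombies": (Fraction(1, 2), 1),
          "world 2: billions of minds": (Fraction(1, 2), 10**9)}

# SIA: multiply each world's prior by its number of minds, then normalize.
weights = {name: prior * n_minds for name, (prior, n_minds) in worlds.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

# World 2 ends up a billion times more likely than world 1.
print(posterior["world 2: billions of minds"])
```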

SIA doomsday: The filter is ahead

The great filter, as described by Robin Hanson:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begating such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

I will argue that we are not far along at all. Even if the steps of the filter we have already passed look about as hard as those ahead of us, most of the filter is probably ahead. Our bright future is an illusion; we await filtering. This is the implication of applying the self indication assumption (SIA) to the great filter scenario, so before I explain the argument, let me briefly explain SIA.

SIA says that if you are wondering which world you are in, rather than just wondering which world exists, you should update on your own existence by weighting possible worlds as more likely the more observers they contain. For instance if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is this setup and that you exist, SIA says heads was twice as likely as tails. This is contentious; many people think in such a situation you should think heads and tails equally likely. A popular result of SIA is that it perfectly protects us from the doomsday argument. So now I’ll show you we are doomed anyway with SIA.
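The coin-flip update is small enough to write out. As a toy sketch (only the fair coin and the one-versus-two observer counts come from the example above; everything else is just bookkeeping), weight each world’s prior by its number of observers and renormalize:

```python
# SIA update for the coin-flip example: multiply each world's prior
# probability by the number of observers it contains, then renormalize.
priors = {"heads": 0.5, "tails": 0.5}    # fair coin
observers = {"heads": 2, "tails": 1}     # people created in each world

weighted = {w: priors[w] * observers[w] for w in priors}
total = sum(weighted.values())
posterior = {w: weighted[w] / total for w in weighted}

# heads comes out at 2/3 and tails at 1/3: heads is twice as likely.
```

Those who reject SIA would stop after the first line, leaving heads and tails at 0.5 each.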

Consider the diagrams below. The first one is just an example with one possible world, so you can see clearly what all the boxes mean in the second diagram, which compares worlds. In a possible world there are three planets and three stages of life. Each planet starts at the bottom and moves up, usually until it reaches the filter. This is where most of the planets become dead, signified by grey boxes. In the example diagram the filter is after our stage. The small number of planets and stages and the concentration of the filter are for simplicity; in reality the filter needn’t be only one unlikely step, and there are many planets and many phases of existence between dead matter and galaxy-colonizing civilization. None of these simplifications matters to the argument.

Diagram key

The second diagram shows three possible worlds where the filter is in different places. In every case one planet reaches the last stage in this model – this is to signify a small chance of reaching the last step, because we don’t see anyone out there, but have no reason to think it impossible. In the diagram, we are in the middle stage, earthbound technological civilization say. Assume the various places we think the filter could be are equally likely.

SIA doom

This is how to reason about your location using SIA:

  1. The three worlds begin equally likely.
  2. Update on your own existence using SIA by multiplying the likelihood of worlds by their population. Now the likelihood ratio of the worlds is 3:5:7.
  3. Update on knowing you are in the middle stage. New likelihood ratio: 1:1:3. Of course if we began with a realistic number of planets in each possible world, the 3 would be humungous, and we would be overwhelmingly likely to be in the world with the filter ahead.

Therefore we are much more likely to be in worlds where the filter is ahead than behind.
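The three steps can be run as a calculation. In this sketch the per-world populations (3, 5, 7) and middle-stage counts (1, 1, 3) are just the ones implied by the ratios stated above for the toy diagram; with realistic numbers the late-filter world would dominate far more:

```python
# SIA reasoning over the three toy filter worlds.
worlds = ["early filter", "middle filter", "late filter"]
population = {"early filter": 3, "middle filter": 5, "late filter": 7}
middle_stage = {"early filter": 1, "middle filter": 1, "late filter": 3}

# Step 1: the three worlds begin equally likely.
prior = {w: 1.0 for w in worlds}
# Step 2: SIA weights each world by its total observer population -> 3:5:7.
sia = {w: prior[w] * population[w] for w in worlds}
# Step 3: condition on being at the middle stage; the chance a random
# observer in world w is middle-stage is middle_stage[w] / population[w].
post = {w: sia[w] * middle_stage[w] / population[w] for w in worlds}  # 1:1:3
total = sum(post.values())
posterior = {w: post[w] / total for w in worlds}

# The late-filter world ends up with 3/5 of the probability.
```

Note that steps 2 and 3 together just leave each world weighted by its number of middle-stage observers, which is the whole force of the argument: filters ahead of us don’t reduce the count of observers like us, while filters behind us do.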

—-

Added: I wrote a thesis on this too.


A status theory of blog commentary

Commentary on blogs usually comes in two forms: comments there and posts on other blogs. In my experience, comments tend to disagree and to be negative or insulting much more than links from other blogs are. In a rough count of comments and posts taking a definite position on this blog, 25 of 35 comments disagreed, while only 1 of 12 posts did – and that’s not counting another 11 posts which linked without comment, a seemingly approving act. Why is this?

Here’s a theory. Let’s say you want status. You can get status by affiliating with the right others. You can also get status within an existing relationship by demonstrating yourself to be better than others in it. When you have a choice of who to affiliate with, you will do better not to affiliate at all with most of the people you could demonstrate your superiority to in a direct engagement, so you mostly try to affiliate with higher status people and ignore or mock from a distance those below you. However when it is already given that you affiliate with someone, you can gain status by seeming better than they are.

This theory is supported if there is more status conflict in less voluntary relationships than in voluntary ones, which seems correct. Compare less voluntary relationships in workplaces, schoolgrounds, families, and between people and employees of organizations they must deal with (such as welfare offices) with more voluntary relationships such as friendships, romantic relationships, voluntary trade, and acquaintanceships.

This theory would explain the pattern of blog commentary. Other bloggers are choosing whether to affiliate with your blog, visibly to outside readers. As in the rest of life, the blogger would prefer to be seen as up with good bloggers and winning stories than to be bickering with bad bloggers, who are easy to come by. So bloggers mostly link to good blogs or posts and don’t comment on bad ones.

Commenters are visible only to others in that particular comments section. Nobody else there will be impressed or interested to observe that you read this blogger or story, as they all do. So the choice of whether to affiliate doesn’t matter, and all the fun is in showing superiority within that realm. Pointing out that the blogger is wrong shows you are smarter than they are, while agreeing says nothing. So commenters tend to criticize where they can and not bother commenting on posts they agree with.

Note that this wouldn’t mean opinions are shaped by status desire, but that there are selection effects so that bloggers don’t publicize their criticisms and commenters don’t publicize what they like.

Generous people cross the street before the beggar

Robert Wiblin points to a study showing that the most generous people are the most keen to avoid situations where they will be generous, even though the people they would have helped will go without.

We conduct an experiment to demonstrate the importance of sorting in the context of social preferences. When individuals are constrained to play a dictator game, 74% of the subjects share. But when subjects are allowed to avoid the situation altogether, less than one third share. This reversal of proportions illustrates that the influence of sorting limits the generalizability of experimental findings that do not allow sorting. Moreover, institutions designed to entice pro-social behavior may induce adverse selection. We find that increased payoffs prevent foremost those subjects from opting out who share the least initially. Thus the impact of social preferences remains much lower than in a mandatory dictator game, even if sharing is subsidized by higher payoffs…

A big example of generosity inducing institutions causing adverse selection is market transactions with poor people.

For some reason we hold those who trade with another party responsible for that party’s welfare. We blame a company for not providing its workers with more, but don’t blame other companies for lack of charity to the same workers. This means that you can avoid responsibility to be generous by not trading with poor people.

Many consumers feel that if they are going to trade with poor people they should buy fair trade or thoroughly research the supplier’s niceness. However they don’t have the money or time for these, so they instead just avoid buying from poor people. Only the less ethical remain to contribute to the purses of the poor.

Probably the kindest girl in my high school said to me once that she didn’t want a job where she would get rich because there are so many poor people in the world. I said that she should be rich and give the money to the poor people then. Nobody was wowed by this idea. I suspect something similar happens often with people making business and employment decisions. Those who have qualms about a line of business such as trade with poor people tend not to go into that, but opt for something guilt free already, while the less concerned do the jobs where compassion might help.

Charitable explanation

Is anyone really altruistic? The usual cynical explanations for seemingly altruistic behavior are that it makes one feel good, it makes one look good, and it brings other rewards later. These factors are usually present, but how much do they contribute to motivation?

One way to tell if it’s all about altruism is to invite charity that explicitly won’t benefit anyone. Curious economists asked their guinea pigs for donations to a variety of causes, warning them:

“The amount contributed by the proctor to your selected charity WILL be reduced by however much you pass to your selected charity. Your selected charity will receive neither more nor less than $10.”

Many participants chipped in nonetheless:

We find that participants, on average, donated 20% of their endowments and that approximately 57% of the participants made a donation.

This is compared to giving an average of 30-49% in experiments where donating benefited the cause, but it is of course possible that knowing you are helping offers more of a warm glow. It looks like at least half of giving isn’t altruistic at all, unless the participants were interested in the wellbeing of the experimenters’ funds.

The opportunity to be observed by others also influences how much we donate, and we are duly rewarded with reputation:

Here we demonstrate that more subjects were willing to give assistance to unfamiliar people in need if they could make their charity offers in the presence of their group mates than in a situation where the offers remained concealed from others. In return, those who were willing to participate in a particular charitable activity received significantly higher scores than others on scales measuring sympathy and trustworthiness.

This doesn’t tell us whether real altruism exists though. Maybe there are just a few truly altruistic deeds out there? What would a credibly altruistic act look like?

Fortunately for cute children desirous of socially admirable help, much charity is not driven by altruism (picture: Laura Lartigue)


If an act made the doer feel bad, look bad to others, and endure material cost, while helping someone else, we would probably be satisfied that it was altruistic. For instance if a person killed their much loved grandmother to steal her money to donate to a charity they believed would increase the birth rate somewhere far away, at much risk to themselves, it would seem to escape the usual criticisms. And there is no way you would want to be friends with them.

So why would anyone tell you if they had good evidence they had been altruistic? The more credible evidence should look particularly bad. And if they were keen to tell you about it anyway, you would have to wonder whether it was for show after all. This makes it hard for an altruist to credibly inform anyone that they were altruistic. On the other hand the non-altruistic should be looking for any excuse to publicize their good deeds. This means the good deeds you hear about should be very biased toward the non-altruistic. Even if altruism were all over the place it should be hard to find. But it’s not, is it?