Tag Archives: philosophy

Suspicious arguments regarding cow counting

People sometimes think that the doomsday argument is implausible because it always says we are more likely to die out sooner than our other reasoning suggests, regardless of the situation. There’s something dubious about an argument that has the same conclusion about the world regardless of any evidence about it. Nick Bostrom paraphrases, “But isn’t the probability that I will have any given rank always lower the more persons there will have been? I must be unusual in some respects, and any particular rank number would be highly improbable; but surely that cannot be used as an argument to show that there are probably only a few persons?” (he does not agree with this view).

That this reasoning is wrong is no new insight. Nick explains for instance that in any given comparison of different length futures, the doomsday reasoning doesn’t always give you the same outcome. You might have learned that your birth rank ruled out the shorter future. It remains the case though that the shift from whatever you currently believe to what the doomsday argument tells you to believe is always one toward shorter futures. I think it is this that seems fishy to people.
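To make the direction of that shift concrete, here is a toy Bayesian calculation in the style of the doomsday argument. The numbers (a 50/50 prior between 200 and 2000 total people, and a birth rank of 100) are my own illustrative assumptions, not anyone's considered estimates:

```python
# Illustrative Bayes update behind doomsday-style reasoning: two hypotheses
# about the total number of people who will ever live, and the evidence of
# your own birth rank, assumed uniformly distributed under each hypothesis.

def doomsday_update(prior_short, n_short, n_long, birth_rank):
    """Posterior probability of the 'short' future given a birth rank."""
    like_short = 1.0 / n_short if birth_rank <= n_short else 0.0
    like_long = 1.0 / n_long if birth_rank <= n_long else 0.0
    joint_short = prior_short * like_short
    joint_long = (1.0 - prior_short) * like_long
    return joint_short / (joint_short + joint_long)

# A 50/50 prior between 200 and 2000 total people, and a birth rank of 100:
print(doomsday_update(0.5, 200, 2000, 100))  # 10/11, about 0.909

# A rank of 500 rules out the short future entirely, so a particular
# piece of rank evidence can favour the longer future:
print(doomsday_update(0.5, 200, 2000, 500))  # 0.0
```

Whenever the rank is compatible with both hypotheses, the likelihood ratio favours the shorter future, which is the predictable shift discussed above; but as the second call shows, a particular rank can rule the shorter future out.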

I maintain that the argument’s predictable conclusion is not a problem at all, and I would like to make this vivid.

Once a farmer owned a group of cows. He would diligently count them, to ensure none had escaped, and discover if there were any new calves. He would count them by lining them up and running his tape measure along the edge of the line.

“One thousand cows” he exclaimed one day. “Fifty new calves!”

His neighbour heard him from a nearby field, and asked what he was talking about. The farmer held out his tape measure. The incredulous neighbour explained that since cows are more than an inch long, his figures would need some recalculation. Since his cows were about five feet long on average, the neighbour guessed he would need to divide his number by 60. But the farmer quickly saw that this argument must be bogus. If his neighbour was right, then whatever number of cows he had, the argument would say he had fewer. What kind of argument would that be?

A similar one to the Doomsday Argument’s claim that the future should always be shorter than we otherwise think. In such cases the claim is that your usual method of dealing with evidence is biased, not that there is some particular uncommon evidence that you didn’t know about.

Similarly, the Self Indication Assumption’s ‘bias’ toward larger worlds is taken as a reason against it. Yet it is just a claim that our usual method is biased toward small worlds.

Is it obvious that pain is very important?

“Never, for any reason on earth, could you wish for an increase of pain. Of pain you could wish only one thing: that it should stop. Nothing in the world was so bad as physical pain. In the face of pain there are no heroes, no heroes [...]” –George Orwell, 1984, via Brian Tomasik, who seems to agree that just considering pain should be enough to tell you that it’s very important.

It seems quite a few people I know consider pain to have some kind of special status of badness, and that preventing it is thus much more important than I think it is. I wouldn’t object, except that they apply this in their ethics, rather than just their preferences regarding themselves. For instance, they argue that other people shouldn’t have children, because of the possibility of those children suffering pain. I think pain is less important to most people, relative to their other values, than such negative utilitarians and similar folk believe.

One such argument for the extreme importance of pain is something like ‘it’s obvious’. When you are in a lot of pain, nothing seems more important than stopping that pain. Hell, even when you are in a small amount of pain, mitigating it seems a high priority. When you are looking at something in extreme pain, nothing seems more important than stopping that pain. So pain is just obviously the most important bad thing there is. The feeling of wanting a boat and not having one just can’t compare to pain. The goodness of lying down at the end of a busy day is nothing next to the badness of even relatively small pains.

I hope I do this argument justice, as I don’t have a proper written example of it at hand.

An immediate counter is that when we are not in pain, or directly looking at things in pain, pain doesn’t seem so important. For instance, though many people in the throes of a hangover consider it to be pretty bad, they are repeatedly willing to trade half a day of hangover for an evening of drunkenness. ‘Ah’, you may say, ‘that’s just evidence that life is bad – so bad that they are desperate to relieve themselves from the torment of their sober existences! So desperate that they can’t think of tomorrow!’. But people have been known to plan drinking events, and even to be in quite good spirits in anticipation of the whole thing.

It is implicit in the argument from ‘pain seems really bad close up’ that pain does not seem so bad from a distance. How then to know whether your near or far assessment is better?

You could say that up close is more accurate, because everything is more accurate with more detail. Yet since this is a comparison between different values, being up close to one relative to others should actually bias the judgement.

Perhaps up close is more accurate because at a distance we do our best not to think about pain, because it is the worst thing there is.

If you are like many people, when you are eating potato chips, you really want to eat more potato chips. Concern for your health, your figure, your experience of nausea all pale into nothing when faced with your drive to eat more potato chips. We don’t take that as good evidence that really deep down you want to eat a lot of potato chips, and you are just avoiding thinking about it all the rest of the time to stop yourself from going crazy. How is that different?

Are there other reasons to pay special attention to the importance of pain to people who are actually experiencing it?

Added: I think I have a very low pain threshold, and am in a lot of pain far more often than most people. I also have bad panic attacks from time to time, which I consider more unpleasant than any pain I have come across, and milder panic attacks frequently. So it’s not that I don’t know what I’m talking about. I agree that suffering comes with (or consists of) an intense urge to stop the suffering ASAP. I just don’t see that this means that I should submit to those urges the rest of the time. To the contrary! It’s bad enough to devote that much time to such obsessions. When I am not in pain I prefer to work on other goals I have, like writing interesting blog posts, rather than say trying to discover better painkillers. I am not willing to experiment with drugs that could help if I think they might interfere with my productivity in other ways. Is that wrong?

Reasons for Persons

Suppose you are replicated on Mars, and the copy of you on Earth is killed ten minutes later. Most people feel like there is some definite answer to whether the Martian is they or someone else. Not an answer arrived at by merely defining ‘me’ to include or exclude alien clones, but some real me-ness which persists or doesn’t, even if they don’t know which. In Reasons and Persons, Derek Parfit argues that there is no such thing. Personal identity consists of physical facts such as how well I remember being a ten year old and how much my personality is similar to that girl’s. There is nothing more to say about whether we are the same person than things like this, plus pragmatic definitional judgements, such as that a label should only apply to one person at a given time. He claims that such continuity of memories and other psychological features is what matters to us, so as long as that continuity exists it shouldn’t matter whether we decide to call someone ‘me’ or ‘my clone’.

I agree with him for the most part. But he is claiming that most people are very wrong about something they are very familiar with. So the big question must be why everyone is so wrong, and why they feel so sure of it. I have had many a discussion where my conversational partner insists that if they were frozen and revived, or a perfect replica were made of them, or whatever, it would not be them. 

To be clear, what exactly is this fallacious notion of personal identity that people have?

  • each human has one and only one, which lasts with them their entire life
  • if you cease to have it you are dead, because you are it
  • it doesn’t wax or wane; it can only be present or absent
  • it is undetectable (except arguably from the inside)
  • two people can’t have the same one, even if they both split from the same previous person somehow
  • identities are unique even if they have the same characteristics – if I were you and you were me, our identities would be the other way around from how they are, and that would be different from the present situation

So basically, they are like unique labels for each human which label all parts of that human and distinguish it from all other humans. Except they are not labels, they are really there, characterising each creature as a particular person.

I suspect, then, that such a notion is a basic part of conducting social relationships. Suppose you want to have nuanced relationships, with things like reciprocation and threats and loyalty, with a large number of other monkeys. Then you should be interested in things like which monkey today is the one who remembers that you helped them yesterday, or which is the one who you have previously observed get angry easily.

This seems pretty obvious, but that’s because you are so well programmed to do it. There are actually a lot of more obvious surface characteristics you could pay attention to when categorising monkeys for the purpose of guessing how they will behave: where they are, whether they are smiling, eating, asleep. But these are pretty useless next to apparently insignificant details such as that they have large eyes and a hairier-than-average nose, which are important because they are signs of psychological continuity. So you have to learn to categorise monkeys, unlike other things, by tiny clues to some hidden continuity inside them. There is no need for us to think of ourselves as tracking anything complicated, like a complex arrangement of consistent behaviours that are useful to us, so we just think of what we care about in others as an invisible thing which is throughout a single person at all times and never in any other people.

The clues might differ over time. The clues that told you which monkey was Bruce ten years ago might be quite different from the ones that tell you that now. Yet you will do best to steadfastly believe in a continuing Bruceness inside all those creatures. That is because even if he changes from an idealistic young monkey to a cynical old monkey, he still remembers that he is your friend, and all the nuances of your relationship, which is what you want to keep track of. So you think of his identity as stretching through an entire life, and as not getting stronger or weaker according to his physical details.

One very simple heuristic for keeping track of these invisible things is that there is only ever one instantiation of each identity at a given time. If the monkey in the tree is Mavis, then the monkey on the ground isn’t. Even if they are identical twins, and you can’t tell them apart at all, the one you are friends with will behave differently to you than the one whose nuts you stole, so you’d better be sure to conceptualise them as different monkeys, even if they seem physically identical.

Parfit argues that what really matters – even if we don’t appreciate it because we are wrong about personal identity – is something like psychological or physical continuity. He favours psychological continuity, if I recall. However if the main point of this deeply held belief in personal identity is to keep track of relationships and behavioural patterns, that suggests that what really matters to us in that vicinity is more limited than psychological continuity. A lot of psychological continuity is irrelevant for tracking relationships. For instance if you change your tastes in food, or have a terrible memory for places, or change over many years from being reserved to being outgoing, people will not feel that you are losing who you are. However if you change your loyalties, or become unable to recognise your friends, or have fast unpredictable shifts in your behaviour, I think people will feel that you are.

Which is not to say I think you should care about these kinds of continuity when you decide whether an imperfect upload would still be you. I’m just hypothesising that these are the things that will make people feel like ‘what matters’ in personal identity has been maintained, should they stop thinking that what matters is invisible temporal string. Of course what you should call yourself, for the purpose of caring disproportionately about it and protecting its life, is a matter of choice, and I’m not sure any of these criteria is the best basis for it. Maybe you should just identify with everyone and avoid dying until the human race ends.

What to not know

I just read ‘A counterexample to the contrastive account of knowledge’ by Jason Rourke, at the suggestion of John Danaher. I’ll paraphrase what he says before explaining why I’m not convinced. I don’t actually know much more about the topic, so maybe take my interpretation of a single paper with a grain of salt. Which is not to imply that I will tell you every time I don’t know much about a topic.

Traditionally ‘knowing’ has been thought of as a function of two things: the person who does the knowing, and the thing that they know. The ‘Contrastive Account of Knowledge’ (CAK) says that it’s really a function of three things – the knower, the thing they know, and the other possibilities that they have excluded.

For instance I know it is Monday if we take the other alternatives to be ‘that it is Tuesday and my computer is accurate on this subject’, etc. I have excluded all those possibilities just now by looking at my computer. However if alternatives such as that of it being Tuesday and my memory and computer saying it is Monday are under consideration, then I don’t know that it’s Monday. Whether I have the information to say P is true depends on what’s included in not-P.

So it seems to me CAK would be correct if there were no inherent set of alternatives to any given proposition, or if we often mean to claim that only some of these alternatives have been excluded when we say something is known. It would be wrong if knowing X didn’t rely on any consideration of the mutually exclusive alternatives, and unimportant if there were a single set of alternatives determined by the proposition whose truth is known, which is what people always mean to consider.

Rourke seems to be arguing that CAK is not like what we usually mean by knowledge. He seems to be doing this by claiming that knowing things need not involve consideration of the alternatives. He gives this example:

The Claret Case. Imagine that Holmes and Watson are investigating a crime that occurred during a meeting attended by Lestrade, Hopkins, LeVillard, and no others. The question Who drank claret? is under discussion. Watson announces ‘‘Holmes knows that Lestrade drank claret.’’ Given the question under discussion and the facts described, the alternative propositions that partially constitute the knowledge relation are Hopkins drank claret and LeVillard drank claret.

He then argues basically that Holmes can know that Lestrade drank claret without knowing that Hopkins and LeVillard didn’t drink claret, since all their claret drinking was independent. He thinks this contradicts CAK because he claims, using CAK,

 The logical form of Watson’s announcement, then, is Holmes knows that Lestrade drank claret rather than Hopkins drank claret or LeVillard drank claret.

Whereas we want to say that Holmes does know Lestrade drank claret, if for instance he sees Lestrade drinking claret, and he need not necessarily know anything about what Hopkins and LeVillard were up to.

Which prompts the question of why Rourke thinks these other guys’ drinking constitutes the alternatives to Lestrade’s drinking in the knowledge relation. The obvious real alternative to exclude is that Lestrade didn’t drink.

Rourke gets to something like this as a counterargument, and argues against it. He says that if ‘who drank claret?’ is interpreted as ‘work out whether or not each person drank claret’ then it can be divided up in this way into ‘Lestrade drank claret’ vs. ‘Lestrade did not drink claret’ combined with ‘Hopkins drank claret’ vs ‘Hopkins did not drink claret’ etc. However if the question is meant as something like ‘who is a single person who drank claret?’, then ‘knowing’ the answer to this question doesn’t require excluding all the alternative answers to this question, some of which may be true.

As far as I can tell, this seems troublesome because he supposes that the alternatives to the purported knowledge must be the various other possible answers to the question, if what you supposedly know is ‘the answer to the question’. The alternative answers to such a question can only be positive reports of different people drinking, or that nobody drank. The question doesn’t ask for any mentions of who didn’t drink. So what can we contrast ‘Lestrade drank’ with, if not ‘Lestrade didn’t drink’?

But why suppose that the alternatives must be the other answers to the question? If ‘knowing who drank claret’ just means knowing that a certain answer to that question is true rather than false for instance, there seems no problem. For instance perhaps ‘I know who drank’ means that I know ‘Lestrade did’ is one answer to the question. This can happily be contrasted with ‘Lestrade did’ not being an answer for instance. Why not suppose ‘I know who drank claret’ is shorthand for something like that?

It seems that at least for any specific state of the world, it’s possible to think of knowing it in terms of excluding the alternatives. It also seems that answering more awkwardly worded questions, such as the one above, must still be based on knowledge about straightforward states of the world. So how could knowledge of at least one person who drank, for instance, not be understandable in terms of excluding alternatives?

What ‘believing’ usually is

Experimental Philosophy discusses the following experiment. Participants were told a story of Tim, whose wife is cheating on him. He gets a lot of evidence of this, but tells himself it isn’t so.

Participants given this case were then randomly assigned to receive one of the two following questions:

  • Does Tim know that Diane is cheating on him?
  • Does Tim believe that Diane is cheating on him?

Amazingly enough, participants were substantially more inclined to say yes to the question about knowledge than to the question about belief.

This idea that knowledge absolutely requires belief is sometimes held up as one of the last bulwarks of the idea that concepts can be understood in terms of necessary conditions, but now we seem to be getting at least some tentative evidence against it. I’d love to hear what people think.

I’m not surprised – people often explicitly say things like ‘I know X, but I really can’t believe it yet’. This seems uninteresting from the perspective of epistemology. ‘Believe’ in common usage just doesn’t mean the same as what it means in philosophy. Minds are big and complicated, and ‘believing’ is about what you sincerely endorse as the truth, not what seems likely given the information you have. Your ‘beliefs’ are probably related to your information, but also to your emotions and wishes and simplifying assumptions, among other things. ‘Knowing’, on the other hand, seems to be commonly understood as being about your information state. Though not always – for instance ‘I should have known’ usually means ‘in my extreme uncertainty, I should have suspected enough to be wary’. At any rate, in common use knowing and believing are not directly related.

This is further evidence you should be wary of what people ‘believe’.

Leaving out the dead

She asked how uncle Freddie was doing. The past few days have been quite bad for him, I said. He was killed by a bus just over a month ago. The first few weeks nothing good happened that he would have missed, but he really would have liked it when the cousins visited. We are thinking about cancelling the wedding. He really would have wanted to be there and the deprivations are getting to be a bit much.

This is a quote from Ben Bradley via Clayton Littlejohn‘s blog. Commenters there agree that postponing the wedding will not help Freddie, but their suggestions about why seem quite implausible to me.

This is really no different from the case where Freddie is alive but can’t come to the wedding because he is busy. Would it be better for him if we cancelled it entirely, so that he wouldn’t be able to come in any case? I hope it is clear enough here that the answer is no. His loss from failing to attend is the comparison between a world where he could attend and the real world. Changing to a different real world where he still can’t attend makes no difference to him in terms of deprivation. This doesn’t involve the controversial questions about how to treat non-existent people. But I think in all relevant ways it is just the same as dead Freddie’s problem.

The apparent trickiness or interestingness of the original problem seems to stem from thinking of Freddie’s loss as being some kind of suffering at some point in time in the real world, rather than a comparison between the real world and some counterfactual one. This prompts confusion, because it seems strange to think he is suffering when he doesn’t exist, yet also strange to think that he doesn’t bear some cost from missing out on these things or from being dead.

But really there is no problem here, because he is not suffering in the affective sense; the harm to him is just that of missing out. It would indeed be strange if he suffered ill feelings, but failing to enjoy a good experience seems well within the capacity of a dead person. And as John Broome has elaborated before: while suffering happens at particular times, harms are comparisons between worlds, perhaps of whole lives, so they don’t need to be associated with specific times. My failure to have experienced a first time bungee jumping can’t usefully be said to have occurred at any particular moment, yet it is quite clear that I have failed to experience it. You could say it happens at all moments, but one can really only expect a single first bungee jump, so I can’t claim to suffer from the aggregate loss of failing to experience it at every moment.

You might think of the failure as happening at different moments in different comparisons with worlds where I do bungee jump at different times. This is accurate in some sense, but there is just no need to bother differentiating all those worlds in order to work out whether I have suffered that cost. And without trying to specify a time for the failure, you avoid any problems when asked if a person who dies before they would have bungee jumped missed out on bungee jumping. And it becomes easy to say that Freddie suffered a cost from missing the wedding, one that cannot be averted by everyone else missing it too.


P.S. If you wonder where I have been lately, the answer is mostly moving to Pittsburgh, via England. I’m at CMU now, and trying to focus more on philosophy topics (my course of study here). If you know of good philosophy blogs, please point them out to me. I am especially interested in ones about ideas, rather than conference dates and other news.

Hidden philosophical progress

Bertrand Russell:

If you ask a mathematician, a mineralogist, a historian, or any other man of learning, what definite body of truths has been ascertained by his science, his answer will last as long as you are willing to listen. But if you put the same question to a philosopher, he will, if he is candid, have to confess that his study has not achieved positive results such as have been achieved by other sciences…this is partly accounted for by the fact that, as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science. The whole study of the heavens, which now belongs to astronomy, was once included in philosophy; Newton’s great work was called ‘the mathematical principles of natural philosophy’. Similarly, the study of the human mind, which was a part of philosophy, has now been separated from philosophy and has become the science of psychology. Thus, to a great extent, the uncertainty of philosophy is more apparent than real: those questions which are already capable of definite answers are placed in the sciences, while those only to which, at present, no definite answer can be given, remain to form the residue which is called philosophy.

I often hear this selection effect explanation for the apparently small number of resolved problems that philosophy can boast. I don’t think it necessarily lessens this criticism of philosophy however. It matters whether the methods that were successful at providing insights in what were to become fields like psychology and astronomy – those which brought definite answers within reach – were methods presently included in philosophy. If they were not, then the fact that the word ‘philosophy’ has come to apply to a smaller set of methods which haven’t been successful does not particularly suggest that such methods will become successful in that way*. If they were the same methods, then that is more promising.

I don’t know which of these is the case. I also don’t actually know how many resolved problems philosophy has. If you do, feel free to tell me. I start a PhD in philosophy in the Autumn, and haven’t officially studied it before, so I am curious about its merits.

*Note that collecting resolved problems is only one way philosophy might be valuable. Russell points out that philosophy has been productive at making us less certain about things we thought we knew, which is important information.

Compare the unconceived – don’t unchain them

People often criticise me for thinking of potential people as Steven Landsburg describes (without necessarily endorsing):

…like prisoners being held in a sort of limbo, unable to break through into the world of the living. If they have rights, then surely we are required to help some of them escape.

Such people seem to believe this position is required for considering creating good lives an activity with positive value. It is not required, and I don’t think of potential people like that. My position is closer to this:

Benefit and harm are comparative notions. If something benefits you, it makes your life better than it would have been, and if something harms you it makes your life worse than it would have been. To determine whether some event benefits or harms you, we have to compare the goodness of your life as it is, given the event, with the goodness it would otherwise have had. The comparison is between your whole life as it is and your whole life as it would have been. We do not have to make the comparison time by time, comparing each particular time in one life with the same time in the other life.

That is John Broome explaining why death harms people even on the view that all benefit and harm consist of pleasure and pain, which are things that can’t happen when you are dead. The same goes for potential people.

Yes, you can’t do much to a person who doesn’t exist. They don’t somehow suffer imaginary pains. If someone doesn’t exist in any possible world, I agree they can’t be helped or harmed at all. What makes it possible to affect a potential person is that there are some worlds where they do exist. It is in the comparison between these worlds and the ones where they don’t exist that I say there is a benefit to them in having one over the other. The benefit of existing consists of the usual things that we hold to benefit a person when they exist: bananas, status, silly conversations, etc. The cost of not existing relative to existing consists of failing to have those benefits, which only exist in the world where the person exists. The cost does not consist of anything that happens in the world where the person doesn’t exist. They don’t have any hypothetical sorrow, boredom or emptiness at missing out. If they did have such things and they mattered somehow, that would be an entirely separate cost.

Often it sounds crazy that a non-existent person could ‘suffer’ a cost because you are thinking of pleasures and pains (or whatever you take to be good or bad) themselves, not of a comparison between these things in different worlds. Non-existent people seem quite capable of not having pleasures or pains, not having fulfilled preferences, not having worthwhile lives, of not having anything at all, of not even having a capacity to have. Existent people are quite capable of having pleasures (and pains) and all that other stuff. If you compare the two of them, is it really so implausible that one has more pleasure than the other?

‘Potential people’ makes people think of non-existing people, but for potential people to matter morally, it’s crucial that they do exist in some worlds (in the future) and not in others. It may be better to think of them as semi-existing people.

I take it that the next counterargument is something like ‘you can’t compare two quantities when one of them is not zero, but just isn’t there. What’s bigger, 3 or … ?’ But you decide what quantities you are comparing. You can choose a quantity that doesn’t have a value in one world if you want. Similarly I could claim all the situations you are happy to compare are not comparable. Getting one hundred dollars would not benefit you, because ‘you without a hundred dollars’ just won’t be around in the world where you get paid. On the other hand if you wanted to compare benefits to Amanda across worlds where she may or may not exist, you could compare ‘how much pleasure is had by Amanda’, and the answer would be zero in worlds where she doesn’t exist. Something makes you prefer an algorithm like ‘find Amanda and see how much pleasure she has got’, where you can just fail at the finding Amanda bit and get confused. The real question is why you would want this latter comparison. I can see why you might be agnostic, waiting for more evidence of which is the true comparison of importance or something, but I don’t recall hearing any argument for leaping to the non-comparable comparison.
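The difference between those two algorithms can be made concrete with a toy sketch (the worlds and pleasure numbers here are made up purely for illustration): asking ‘how much pleasure is had by Amanda’ of a world returns zero when she is absent, while ‘find Amanda, then measure her pleasure’ simply fails.

```python
# Toy worlds mapping each existing person to their pleasure (made-up units).
world_with_amanda = {"Amanda": 30, "Ben": 10}
world_without_amanda = {"Ben": 10}

def pleasure_had_by(person, world):
    """'How much pleasure is had by <person>' - defined in every world."""
    return world.get(person, 0)

def find_then_measure(person, world):
    """'Find <person>, then see how much pleasure she has' - can fail."""
    return world[person]  # raises KeyError if the person doesn't exist

print(pleasure_had_by("Amanda", world_with_amanda))     # 30
print(pleasure_had_by("Amanda", world_without_amanda))  # 0 -> comparable
# find_then_measure("Amanda", world_without_amanda) raises KeyError:
# the 'non-comparable' verdict comes from choosing this second algorithm.
```

Both algorithms agree whenever the person exists; they only come apart in the worlds at issue, which is why the choice between them is doing all the work.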

In other cases it is intuitive to compare quantities that have values, even when relevant entities differ between worlds. Would you say I have no more orange juice in my cup if I have a cup full of orange juice than if I don’t have a cup or orange juice? I won’t, because I really just wanted the orange juice. And if you do, I won’t come around to have orange juice with you.

I have talked about this a bit before, but not explained in much detail. I’ll try again if someone tells me why they actually believe the comparison between a good life and not existing should come out neutral or with some non-answer such as ‘undefined’. Or at least points me to where whichever philosophers have best explained this.

Person moments make sense of anthropics

Often people think that various forms of anthropic reasoning require you to change your beliefs in ways other than conditionalizing on evidence. This is false, at least in the cases I know of. I shall talk about Frank Arntzenius’ paper Some Problems for Conditionalization and Reflection [gated] because it explains the issue well, though I believe his current views agree with mine.

He presents five thought experiments: Two Roads to Shangri La, The Prisoner, John Collins’s Prisoner, Sleeping Beauty and Duplication. In each of them, it seems the (arguably) correct answer violates van Fraassen’s reflection principle, which basically says that if you expect to believe something in the future, without having been e.g. hit over the head between now and then, you should believe it now. For instance the thirder position in Sleeping Beauty seems to violate this principle because before the experiment Beauty believes there is a fifty percent chance of heads, and knows that when she wakes up she will think there is a thirty-three percent chance. Arntzenius argued that these seemingly correct answers really are the correct ones, and claimed that they violate the reflection principle because credences can evolve in two ways other than by conditionalization.
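To make the thirder arithmetic concrete, here is a minimal sketch (my illustration, not from Arntzenius’ paper): heads produces one awakening, tails produces two, and the thirder weights each awakening by the prior probability of the world containing it.

```python
from fractions import Fraction

# Each coin outcome has prior 1/2. Heads produces one awakening (Monday);
# tails produces two (Monday and Tuesday). The thirder treats each
# awakening as a possibility weighted by the prior of its world.
weights = {
    ("heads", "Mon"): Fraction(1, 2),
    ("tails", "Mon"): Fraction(1, 2),
    ("tails", "Tue"): Fraction(1, 2),
}

total = sum(weights.values())
p_heads = sum(w for (coin, _), w in weights.items() if coin == "heads") / total
print(p_heads)  # 1/3: Beauty's credence on waking, versus 1/2 before the experiment
```

The same bookkeeping gives the halfer answer if you instead split the tails weight between its two awakenings, which is one way of seeing what the disagreement is about.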

First he said credences can shift, for instance through time. I know that tomorrow I will have a higher credence in it being Monday than I do today, and yet it would not be rational for me to increase my credence in it being Monday now on this basis. They can also ‘spread out’. For instance if you know you are in Fairfax today, and that tomorrow a perfect replica of your brain experiencing Fairfax will be made and placed in a vat in Canberra, tomorrow your credence will go from being concentrated in Fairfax to being spread between there and Canberra. This is despite no damage having been done to your own brain. As Arntzenius pointed out, such an evolution of credence looks like quite the opposite of conditionalization, since conditionalization consists of striking out possibilities that your information excludes – it never opens up new possibilities.

I agree that beliefs should evolve in these two ways. However they are both really conditionalization, just obscured. They make sense as conditionalization when you think of them as carried out by different momentary agents, based on the information they infer from their connections to other momentary agents with certain beliefs (e.g. an immediately past self).

Normal cases can be considered this way quite easily. Knowing that you are the momentary agent that followed a few seconds after an agent who knew a certain set of facts about the objective world, and who is (you assume) completely trustworthy, means you can simply update the same prior with those same facts and come to the same conclusion. That is, you don’t really have to do anything. You can treat a stream of moments as a single agent. This is what we usually do.

However sometimes being connected in a certain way to another agent does not make everything that is true for them true for you. Most obviously, if they are a past self and know it is 12 o’clock, your connection via being their one-second-later self means you should exclude worlds where you are not at time 12:00:01. You have still learned from your known relationship to that agent and conditionalized, but you have not learned that what is true of them is true of you, because it isn’t. This is the first way Arntzenius mentioned that credences seem to evolve through time not by conditionalization.

The second way occurs when one person-moment is at location X, and another person-moment has a certain connection to the person at X, but there is more than one possible connection of that sort. For instance when two later people both remember being an earlier person because the earlier person was replicated in some futuristic fashion. Then while the earlier person-moment could condition on their exact location, the later one must condition on being in one of several locations connected that way to the earlier person’s location, so their credence spreads over more possibilities than that of the earlier self. If you call one of these later momentary agents the same person as the earlier one, and say they are conditionalizing, it seems they are doing it wrong. But considered as three different momentary people learning from their connections, they are just conditionalizing as usual.
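The ‘spreading’ in the duplication case can be recast as ordinary conditionalization over a state space of momentary agents. A minimal sketch, with a made-up state space for the Fairfax/Canberra example:

```python
from fractions import Fraction

# States the later person-moment might occupy. The duplication world
# contains two moments connected identically to the earlier
# "knows it is Fairfax" moment: the original brain and the replica.
states = [
    ("Fairfax", "original"),
    ("Canberra", "replica"),
    ("Fairfax", "unconnected bystander"),  # ruled out by the evidence below
]
prior = {s: Fraction(1, 3) for s in states}

# Evidence: "I am a moment connected (by apparent memory) to the earlier
# Fairfax moment." Conditionalization = striking out excluded states
# and renormalizing; nothing new is "opened up" at this level.
consistent = [s for s in states if s[1] in ("original", "replica")]
total = sum(prior[s] for s in consistent)
posterior = {s: prior[s] / total for s in consistent}
print(posterior[("Canberra", "replica")])  # 1/2
```

Viewed this way, the credence that was concentrated in Fairfax ends up spread over Fairfax and Canberra, but each momentary agent has only ever struck out possibilities.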

What exactly the later momentary people should believe is a matter of debate, but I think that can be framed entirely as a question of what their state spaces and priors look like.

Momentary humans almost always pass lots of information from one to the next, chronologically along chains of memory through non-duplicated people, knowing their approximate distance from one another. So most of the time they can treat themselves as single units who just have to update on any information coming from outside, as I explained. But conditionalization is not specific to these particular biological constructions; and when it is applied to information gained through other connections between agents, the resulting time series of beliefs within one human will end up looking different to that in a chain with no unusual extra connections.

This view also suggests that having cognitive defects, such as memory loss, should not excuse anyone from having credences, as for instance Arntzenius argued it should in his paper Reflections on Sleeping Beauty: “in the face of forced irrational changes in one’s degrees of belief one might do best simply to jettison them altogether”. There is nothing special about credences derived from beliefs of a past agent you identify with. They are just another source of information. If the connection to other momentary agents is different to usual, for instance through forced memory loss, update on it as usual.

If ‘birth’ is worth nothing, births are worth anything

It seems many people think creating a life has zero value. Some believe this because they think the average life contains about the same amount of suffering and satisfaction. Others have more conceptual objections, for instance to the notion that a person who does not exist now, and who will otherwise not exist, can be benefited. So they believe that there is no benefit to creating life, even if it’s likely to be a happy life. The argument I will pose is aimed at the latter group.

As far as I know, most people believe that conditional on someone existing in the future, it is possible to help them or harm them. For instance, suppose I were designing a toy for one-year-olds, and I knew it would take more than two years to go to market. Most people would not think the unborn state of its users-to-be gives me more moral freedom to cover it with poisonous paint or be negligent about its explosiveness.

If we accept this, then conditional on my choosing to have a child, I can benefit the child. For instance, if I choose to have a child, I might then consider staying at home to play with the child. Assume the child will enjoy this. If the world where the child is merely born has zero value to the child, relative to the world where I don’t have the child (because we are assuming that being born is worth nothing), then this new world where the child is born and played with must have positive value to the child, relative to the world where it is not born.

On the other hand, suppose I had initially assumed that I would stay at home to play with any child I had, before I considered whether to have a child. Then according to the assumption that any birth is worth nothing, the world where I have the child and play with it is worth nothing more than the one where I don’t have it. This is inconsistent with the previous evaluation, unless you accept that the value of an outcome may depend on the steps by which you imagine it.
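The inconsistency can be made explicit with toy numbers (mine, purely illustrative):

```python
# Value of each world to the child, measured relative to the world
# where the child is never born. PLAY_BENEFIT is an assumed figure.
NO_CHILD = 0       # the reference point: child never exists
PLAY_BENEFIT = 10  # assumed value to the child of being played with

# Evaluation 1: decide to have the child, then decide to play with it.
birth_only = NO_CHILD                        # "creating a life is worth nothing"
birth_then_play = birth_only + PLAY_BENEFIT  # benefiting a child who will exist

# Evaluation 2: assume from the start that any child would be played with.
birth_with_play = NO_CHILD                   # still "a birth", so still worth nothing

print(birth_then_play, birth_with_play)  # 10 versus 0, for the very same world
```

The same world (child born and played with) gets two different values depending on the order in which it is imagined, which is the inconsistency the argument turns on.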

Any birth could be conceptually divided into a number of acts in this way: creating a person in some default circumstance, and improving or worsening the circumstances in any number of ways. If there is no reason to treat a particular set of circumstances as a default, any amount of value can be attributed to any birth situation by starting with a different default labelled ‘birth’ and setting it to zero value. If creating life under any circumstances is worth nothing, a specific birth can be given any arbitrary value. This seems harder to believe, and further from usual intuitions, than believing that creating life usually has a non-zero value.
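A toy sketch of how shifting the zero-valued ‘default’ reassigns the value of one and the same birth (the quality scores are mine, purely illustrative):

```python
def value_of_birth(situation_quality, default_quality):
    # "Creating life in the default circumstance is worth nothing":
    # the default contributes 0, and only the adjustment away from it counts.
    return 0 + (situation_quality - default_quality)

# One particular birth situation, with a made-up quality score of 7.
situation = 7

# Three different choices of the zero-valued default give three
# different values for the very same birth.
print([value_of_birth(situation, d) for d in (0, 7, 100)])  # [7, 0, -93]
```

With no privileged default, the value assigned to a specific birth is determined entirely by an arbitrary choice of baseline, which is the conclusion the argument says is hard to accept.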

You might think I am being unfair in interpreting ‘creating life is worth nothing’ as ‘birth and anything that might come along with it is worth nothing’, but this is exactly what is usually claimed: that creating a life is worth nothing, even if you expect it to be happy, however happy. I am quite willing to agree that some standard of birth is worth nothing, that births in happier circumstances are worth more, and that those in worse circumstances have negative value. But this is my usual position, and the one that the people I am debating here object to.

If you believe creating a life is in general worth nothing, do you also believe that a specific birth can be worth any arbitrary amount?