Tag Archives: ethics

Value realism

People have different ideas about how valuable things are. Before I was about fifteen, the meaning of this was ambiguous to me. I think I assumed that a tree, for instance, has some inherent value, and that when one person wants to cut it down and another wants to protect it, they both have messy estimates of what its true value is. At least one of them had to be wrong. This was understandable because value was vague or hard to get at or something.

In my year 11 Environmental Science class it finally clicked that there wasn’t anything more to value than those ‘estimates’.  That a tree has some value to an environmentalist, and a different value to a clearfelling proponent. That it doesn’t have a real objective value somewhere inside it. Not even a vague or hard to know value that is estimated by different people’s ‘opinions’. That there is just nothing there. That even if there is something there, there is no way for me to know about it, so the values I deal with every day can’t be that sort. Value had to be a two place function. 

I was somewhat embarrassed to have ever assumed otherwise, and didn’t really think about it again until recently, when it occurred to me that a long list of strange things I notice people believing can be explained by the assumption that they disagree with me on whether things have objective values. So I hypothesize that many people believe that value is a one place function which takes a thing as its argument, not a two place function of an agent and a thing.
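To make the distinction concrete, here is a minimal sketch in Python; the agents and the numbers are made up purely for illustration, not drawn from anywhere:

```python
# One place view: a thing carries a single objective value, whoever is asking.
def objective_value(thing):
    true_values = {"tree": 7}  # a hypothetical 'true' value
    return true_values[thing]

# Two place view: value is a relation between an agent and a thing.
def value(agent, thing):
    valuations = {
        ("environmentalist", "tree"): 9,
        ("clearfelling proponent", "tree"): 2,
    }
    return valuations[(agent, thing)]

print(value("environmentalist", "tree"))        # 9
print(value("clearfelling proponent", "tree"))  # 2; no further fact about the tree's 'real' value
```

On the two place view there is simply no leftover question about which of those two numbers is the tree’s real value.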

Here’s my list of strange things. For each I give two explanations: why it is false, and why it seems true if you believe in objective values. Note that these are generally beliefs that cause substantial harm:

When two people trade, one of them is almost certainly losing:

Why it’s false: In most cases where two people are willing to trade, this is because the values they assign to the items in question are such that both will gain by having the other person’s item instead of their own.

Why it’s believed: There’s a total amount of value shared somehow between the people’s possessions. Changing the distribution is very likely to harm one party or the other. It follows that people who engage in trade are suspicious, since trades must be mostly characterized by one party exploiting or fooling another.

Trade can be exploitative:

Why it’s false: Assuming exploitation is bad for the person who is exploited, if a person chooses to trade, we can assume he is either not exploited or is deceived. Free choice is a filter: it causes people who would benefit from an activity to do it while people who wouldn’t do not.

Why it’s believed: If a person is desperate he might sell his labor for instance at a price below its true value. Since he is forced by circumstance to trade something more valuable for something less valuable, he is effectively robbed.

Prostitution etc should be prevented, because most people wouldn’t want to do it freely, so it must be pushed on those who do it:

Why it’s false: Again, free choice is a filter.

Why it’s believed: If most people wouldn’t be prostitutes, it follows that it is probably quite bad. If a small number of people do want to be prostitutes, they are probably wrong. The alternative is that they are correct, and the rest of society is wrong, and it is less likely that a small number of people are correct than a large number. Since these people are wrong, and their being wrong will harm them (most people would really hate to be prostitutes), it is good to prevent them acting either on their false value estimates or on coercion.

If being forced to do X is dreadful, X shouldn’t be allowed:

Why it’s false: Again, choice is a filter. For an arbitrary person doing X, it might be terrible, but it is still often good for people who want it. Plus, being forced to do a thing often decreases its value.

Why it’s believed: Very similar to above. The value of X remains the same regardless of who is thinking about it, or whether they are forced to do it. That a person would choose to do a thing others are horrified to have pressed on them just indicates that the person is mentally dysfunctional in some way.

Being rich indicates that you are evil:

Why it’s false: Since in every trade, both parties are benefited, being rich indicates that you have contributed to others receiving a large amount of value.

Why it’s believed: Since in every trade, someone wins and someone loses, anyone who has won at trading so many times is evidently an untrustworthy and manipulative character.

Poor countries are poor because rich countries are rich:

Why it’s false: That the rich countries don’t altruistically send a lot of aid to the poor countries is perhaps a reason, though it’s not clear that this would help in the long run. Beyond that there’s no obvious connection.

Why it’s believed: There’s a total amount of value to be had in the world. The poor can’t become richer without the rich giving up some value.

The primary result of promotion of products is that people buy things they don’t really want:

Why it’s false: The value of products depends on how people feel about them, so it is possible to create value by changing how people feel about them.

Why it’s believed: Products have a fixed value. Changing your perception of this in the direction of you buying more of them is deceitful sophistry.

***

Questions:

Is my hypothesis right? Do you think of value as a one or two place function? (Or more?) Which of the above beliefs do you hold? Are there legitimate or respectable cases for value realism out there? (Moral realism is arguably a subset).

Is it obvious that pain is very important?

“Never, for any reason on earth, could you wish for an increase of pain. Of pain you could wish only one thing: that it should stop. Nothing in the world was so bad as physical pain. In the face of pain there are no heroes, no heroes [...].” –George Orwell, 1984, via Brian Tomasik, who seems to agree that just considering pain should be enough to tell you that it’s very important.

It seems quite a few people I know consider pain to have some kind of special status of badness, and that preventing it is thus much more important than I think it is. I wouldn’t object, except that they apply this in their ethics, rather than just their preferences regarding themselves. For instance, arguing that other people shouldn’t have children because of the possibility of those children suffering pain. I think pain is less important to most people, relative to their other values, than such negative utilitarians and similar folk believe.

One such argument for the extreme importance of pain is something like ‘it’s obvious’. When you are in a lot of pain, nothing seems more important than stopping that pain. Hell, even when you are in a small amount of pain, mitigating it seems a high priority. When you are looking at something in extreme pain, nothing seems more important than stopping that pain. So pain is just obviously the most important bad thing there is. The feeling of wanting a boat and not having one just can’t compare to pain. The goodness of lying down at the end of a busy day is nothing next to the badness of even relatively small pains.

I hope I do this argument justice, as I don’t have a proper written example of it at hand.

An immediate counter is that when we are not in pain, or directly looking at things in pain, pain doesn’t seem so important. For instance, though many people in the throes of a hangover consider it to be pretty bad, they are repeatedly willing to trade half a day of hangover for an evening of drunkenness. ‘Ah’, you may say, ‘that’s just evidence that life is bad – so bad that they are desperate to relieve themselves from the torment of their sober existences! So desperate that they can’t think of tomorrow!’. But people have been known to plan drinking events, and even to be in quite good spirits in anticipation of the whole thing.

It is implicit in the argument from ‘pain seems really bad close up’ that pain does not seem so bad from a distance. How then to know whether your near or far assessment is better?

You could say that up close is more accurate, because everything is more accurate with more detail. Yet since this is a comparison between different values, being up close to one relative to others should actually bias the judgement.

Perhaps up close is more accurate because at a distance we do our best not to think about pain, because it is the worst thing there is.

If you are like many people, when you are eating potato chips, you really want to eat more potato chips. Concern for your health, your figure, your experience of nausea all pale into nothing when faced with your drive to eat more potato chips. We don’t take that as good evidence that really deep down you want to eat a lot of potato chips, and you are just avoiding thinking about it all the rest of the time to stop yourself from going crazy. How is that different?

Are there other reasons to pay special attention to the importance of pain to people who are actually experiencing it?

Added: I think I have a very low pain threshold, and am in a lot of pain far more often than most people. I also have bad panic attacks from time to time, which I consider more unpleasant than any pain I have come across, and milder panic attacks frequently. So it’s not that I don’t know what I’m talking about. I agree that suffering comes with (or consists of) an intense urge to stop the suffering ASAP. I just don’t see that this means that I should submit to those urges the rest of the time. To the contrary! It’s bad enough to devote that much time to such obsessions. When I am not in pain I prefer to work on other goals I have, like writing interesting blog posts, rather than say trying to discover better painkillers. I am not willing to experiment with drugs that could help if I think they might interfere with my productivity in other ways. Is that wrong?

Motivation on the margin of saving the world

Most people feel that they have certain responsibilities in life. If they achieve those they feel good about themselves, and anything they do beyond that to make the world better is an increasingly imperceptible bonus.

Some people with unusual moral positions or preferences feel responsible for making everything in the world as good as they can make it, and feel bad about the gap between what they achieve and what they could.

In both cases people have a kind of baseline that they care especially about. In the first case they are usually so far above it that nothing they do makes much difference to their feelings. In the second case they are often so far below it that nothing they do makes much difference to their feelings.

Games are engaging when you have a decent chance at both winning and losing. Every move you make matters, so you long to make that one more move. 

I expect the same is true of motivating altruistic consequentialists. I’m not sure how to make achievements on the margin more emotionally salient, but perhaps you do?

Leaving out the dead

She asked how uncle Freddie was doing. The past few days have been quite bad for him, I said. He was killed by a bus just over a month ago. The first few weeks nothing good happened that he would have missed, but he really would have liked it when the cousins visited. We are thinking about cancelling the wedding. He really would have wanted to be there and the deprivations are getting to be a bit much.

This is a quote from Ben Bradley via Clayton Littlejohn’s blog. Commenters there agree that postponing the wedding will not help Freddie, but their suggestions about why seem quite implausible to me.

This is really no different to how it would be if Freddie were alive but couldn’t come to the wedding because he was busy. Would it be better for him if we cancelled it entirely, so he wouldn’t be able to come in any case? I hope it is clear enough here that the answer is no. His loss from failing to attend is the comparison between a world where he could attend and the real world. Changing to a different real world where he still can’t attend makes no difference to him in terms of deprivation. This doesn’t involve the controversial questions about how to treat non-existent people. But I think in all relevant ways it is just the same as dead Freddie’s problem.

The apparent trickiness or interestingness of the original problem seems to stem from thinking of Freddie’s loss as being some kind of suffering at some point in time in the real world, rather than a comparison between the real world and some counterfactual one. This prompts confusion because it seems strange to think he is suffering when he doesn’t exist, yet also strange to think that he doesn’t bear some cost from missing out on these things or from being dead.

But really there is no problem here, because he is not suffering in the affective sense; the harm to him is just that of missing out. It would indeed be strange if he suffered ill feelings, but failing to enjoy a good experience seems well within the capacity of a dead person. And as John Broome has elaborated before – while suffering happens at particular times, harms are comparisons between worlds, perhaps of whole lives, so they don’t need to be associated with specific times. My failure to have experienced a first bungee jump can’t usefully be said to have occurred at any particular moment, yet it is quite clear that I have failed to experience it. You could say the failure happens at all moments, but one can really only expect a single first bungee jump, so I can’t claim to suffer from the aggregate loss of failing to experience it at every moment.

You might think of the failure as happening at different moments in different comparisons with worlds where I do bungee jump at different times. This is accurate in some sense, but there is just no need to bother differentiating all those worlds in order to work out whether I have suffered that cost. And without trying to specify a time for the failure, you avoid any problems when asked if a person who dies before they would have bungee jumped missed out on bungee jumping. And it becomes easy to say that Freddie suffered a cost from missing the wedding, one that cannot be averted by everyone else missing it too.

***

P.S. If you wonder where I have been lately, the answer is mostly moving to Pittsburgh, via England. I’m at CMU now, and trying to focus more on philosophy topics (my course of study here). If you know of good philosophy blogs, please point them out to me. I am especially interested in ones about ideas, rather than conference dates and other news.

Mediocre masses are not what’s repugnant

The usual repugnant conclusion:

A world of people living very good lives is always less good than some much larger world of people whose lives are only just worth living.

My variant, in brief:

A world containing a number of people living very good lives is always less good than some much larger, longer lived world of people whose lives contain extremes of good and bad that overall add up to being only just worth living.

The usual repugnant conclusion is considered very counterintuitive, so most people disagree with it. Consequently avoiding the repugnant conclusion is often taken as a strong constraint on what a reasonable population ethics could look like (e.g. see this list of ways to amend population ethics, or chapter 17 onwards of Reasons and Persons). I asked my readers how crazy they thought it was to accept my variant of the repugnant conclusion, relative to the craziness of accepting the usual one. Below are the results so far.

[Figure: results of the repugnance poll]

Most people’s intuitions about my variant were quite different from the usual intuition about the repugnant conclusion, with only 21% considering both conclusions about as crazy. Everyone else who made the comparison found my version much more palatable, with 57% of people claiming it was quite sensible or better. These are the reverse of the usual intuition.

This difference demonstrates that the usual intuition about the repugnant conclusion can’t be so easily generalised to ‘large populations of low value lives shouldn’t add up to a lot of value’, which is what the repugnant conclusion is usually taken to suggest. Such a generalization can’t be made because the intuition does not hold in such situations in general. The usual aversion must be about something other than population and the value in each life. Something that we usually abstract away when talking about the repugnant conclusion.

What could it be? I changed several things in my variant, so here are some hypotheses:

Variance: This is the most obvious change. Perhaps our intuitions are not so sensitive to the overall quality of a life as to the heights of the best bits. It’s not the notion of a low average that’s depressing, it’s losing the hope of a high.

Time: I described my large civilization as lasting much longer than my short one, rather than being larger only in space. This could make a difference: as Robin and I noted recently, people feel more positively about populations spread across time than across space. I originally included this change because I thought my own ill feelings toward the repugnant conclusion seemed to be driven in part by the loss of hope for future development that a large non-thriving population brings to mind, though that should not be part of the thought experiment. So that’s another explanation for the time dimension mattering.

Respectability/Status: In my variant, the big world people look like respectable, deserving elites, whereas if you picture the repugnant conclusion scenario as a packed subsistence world, they do not. This could make a difference to how valuable their world seems. Most people seem to care much more about respectable, deserving elites than they do about the average person living a subsistence lifestyle. Enjoying First World wealth without sending a lot of it to poor countries almost requires being pretty unconcerned about people who live near subsistence. Could our aversion to the repugnant conclusion merely be a manifestation of that disregard?

Error: Less than 4% of those who looked at my post voted; perhaps those who voted are strange for some reason. Perhaps most of my readers are in favour of accepting all versions of the repugnant conclusion, unlike other people.

Suppose my results really are representative of most people’s intuitions. Something other than the large population of lives barely worth living makes the repugnant conclusion scenario repugnant. Depending on what it is, we might find that intuition more or less worth overruling. For instance, if it is just a disrespect for lowly people, we might prefer to give it up. In the meantime, if the repugnant conclusion is repugnant for some unknown reason which is not that it contains a large number of people with mediocre wellbeing, I think we should refrain from taking it as such a strong constraint on ethics regarding populations and their wellbeing.

Is it repugnant?

Derek Parfit’s ‘Repugnant Conclusion’ is that for any world of extremely happy and fulfilled people, there is a better world which contains a much larger number of people whose lives are only just worth living. This is a hard-to-avoid consequence of ethical theories where more of whatever makes life worth living is better. It’s more complicated than that, but population ethicists have had a hard time finding a theory that avoids the repugnant conclusion without implying other crazy-seeming things.

Parfit originally pointed out that people whose lives are barely worth living could be living lives of constant very low value, or their lives could have huge highs and lows. He asked us to focus on the first. I’m curious whether normal intuitions differ if we focus on a different form of ‘barely worth living’.

Consider an enormous and very rich civilization. Its members appreciate every detail of their lives very sensitively, and their lives are dramatic. They each regularly experience soaring elation, deep contentment and overpowering sensory pleasure. They are keenly ambitious, and almost always achieve their dreams. Everyone is successful and appreciated, and they are all extremely pleased about that. But these people are also subject to deep depressions, and are easily overcome by fear, rage or jealousy. Sometimes they lie awake at night anguished about their insignificance in the universe and their impending deaths. If they don’t achieve what they hoped they can become overwhelmed by guilt, insecurity, and hurt pride. They soon bounce back, but live in slight fear of those emotions. They also have excruciating migraine headaches when they work too hard. All up, the positives in each person’s action-packed life just outweigh the negatives.

Now suppose there is a choice between a small world of people who only appreciate the pleasures, and a much, much larger world like that described above. Perhaps it turns out that the overly pleasured people cannot be made productive, for instance, so we can choose a short future with a large number of people enjoying idle bliss on our saved up resources, or an indefinitely long future with a vastly larger number of productive people each enjoying small net positives. How crazy does it seem to prefer the latter, at some level of extreme size?

 

I give my interpretation of the results here.

Satisfying preferences by creating them

Sarkology points out that the intuition against it being a good thing to create new lives may be this:

…You are supposed to help people by satisfying their (already fixed and existent) preferences. Not by modifying those preferences to meet reality. Or God forbid, invent those preference ex nihilo.

Could this intuition be correct?

Suppose someone else invents a preference somehow. Let’s say they enjoy an evening with a loved one in the presence of the scent of roses, and thus begin a lifelong fondness for that smell. Can you help the person by satisfying this new preference?

If not, you could never help anyone. All preferences are created somehow. So let’s take the usual view that you can help by satisfying preferences others have invented.

What about the person who created the preference? Did he do right or wrong in creating it?

If he did neither right nor wrong, then I could also do neither right nor wrong by creating a preference. Then could I do good by fulfilling it? I can’t see why it should matter whether these two acts are done by different people or the same one. If I can do good this way, then why can’t I do good by doing both of these things at once, creating a preference in a situation which also causes it to be fulfilled? If I can do good that way, then the above intuition is wrong.

It could be incorrect to fulfil preferences ‘by’ creating them if creating them is a bad enough act to outweigh the good gained by fulfilling them. But that would entail that the world would be a better place had many satisfied and happy people not been born, and that having babies is generally a very bad thing to do. I think these things are far more unintuitive than the above intuition being wrong. What do you think?


Compare the unconceived – don’t unchain them

People often criticise me for thinking of potential people in the way Steven Landsburg describes (without necessarily endorsing it):

…like prisoners being held in a sort of limbo, unable to break through into the world of the living. If they have rights, then surely we are required to help some of them escape.

Such people seem to believe this position is required for considering creating good lives an activity with positive value. It is not required, and I don’t think of potential people like that. My position is closer to this:

Benefit and harm are comparative notions. If something benefits you, it makes your life better than it would have been, and if something harms you it makes your life worse than it would have been. To determine whether some event benefits or harms you, we have to compare the goodness of your life as it is, given the event, with the goodness it would otherwise have had. The comparison is between your whole life as it is and your whole life as it would have been. We do not have to make the comparison time by time, comparing each particular time in one life with the same time in the other life.

That is John Broome explaining why death harms people, even if you hold that all benefit and harm consists of pleasure and pain, which are things that can’t happen when you are dead. The same goes for potential people.

Yes, you can’t do much to a person who doesn’t exist. They don’t somehow suffer imaginary pains. If someone doesn’t exist in any possible worlds I agree they can’t be helped or harmed at all. What makes it possible to affect a potential person is that there are some worlds where they do exist. It is in the comparison between these worlds and the ones where they don’t exist that I say there is a benefit to them in having one over the other. The benefit of existing consists of the usual things that we hold to benefit a person when they exist; bananas, status, silly conversations, etc. The cost of not existing relative to existing consists of failing to have those benefits, which only exist in the world where the person exists. The cost does not consist of anything that happens in the world where the person doesn’t exist. They don’t have any hypothetical sorrow, boredom or emptiness at missing out. If they did have such things and they mattered somehow, that would be another entirely separate cost.

Often it sounds crazy that a non-existent person could ‘suffer’ a cost because you are thinking of pleasures and pains (or whatever you take to be good or bad) themselves, not of a comparison between these things in different worlds. Non-existent people seem quite capable of not having pleasures or pains, not having fulfilled preferences, not having worthwhile lives, of not having anything at all, of not even having a capacity to have. Existent people are quite capable of having pleasures (and pains) and all that other stuff. If you compare the two of them, is it really so implausible that one has more pleasure than the other?

‘Potential people’ makes people think of non-existing people, but for potential people to matter morally, it’s crucial that they do exist in some worlds (in the future) and not in others. It may be better to think of them as semi-existing people.

I take it that the next counterargument is something like ‘you can’t compare two quantities when one of them is not zero, but just isn’t there. What’s bigger, 3 or … ?’ But you decide what quantities you are comparing. You can choose a quantity that doesn’t have a value in one world if you want. Similarly I could claim all the situations you are happy to compare are not comparable. Getting one hundred dollars would not benefit you, because ‘you without a hundred dollars’ just won’t be around in the world where you get paid. On the other hand if you wanted to compare benefits to Amanda across worlds where she may or may not exist, you could compare ‘how much pleasure is had by Amanda’, and the answer would be zero in worlds where she doesn’t exist. Something makes you prefer an algorithm like ‘find Amanda and see how much pleasure she has got’, where you can just fail at the finding Amanda bit and get confused. The real question is why you would want this latter comparison. I can see why you might be agnostic, waiting for more evidence of which is the  true comparison of importance or something, but I don’t recall hearing any argument for leaping to the non-comparable comparison.
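A minimal sketch of the two comparison algorithms, in Python; ‘Amanda’, the other person, and the pleasure totals are made-up numbers purely for illustration:

```python
# Two ways of asking 'how much pleasure does Amanda have in this world?'

world_with_amanda = {"Amanda": 30, "Bob": 12}   # made-up pleasure totals
world_without_amanda = {"Bob": 12}              # Amanda never exists in this world

# Algorithm 1: 'how much pleasure is had by Amanda' -- zero if she isn't there.
def pleasure_total(world, person):
    return world.get(person, 0)

# Algorithm 2: 'find Amanda, then read off her pleasure' -- fails if she isn't there.
def pleasure_of_found_person(world, person):
    return world[person]  # raises KeyError in worlds where the person doesn't exist

print(pleasure_total(world_with_amanda, "Amanda"))     # 30
print(pleasure_total(world_without_amanda, "Amanda"))  # 0, so the comparison goes through
# pleasure_of_found_person(world_without_amanda, "Amanda") would raise a KeyError,
# which is the 'non-comparable' verdict.
```

The first algorithm makes the cross-world comparison trivially well defined; the second builds the non-answer in by construction, and it is the preference for that construction that I am asking for an argument for.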


In other cases it is intuitive to compare quantities that have values, even when relevant entities differ between worlds. Would you say I have no more orange juice in my cup if I have a cup full of orange juice than if I have neither a cup nor orange juice? I won’t, because I really just wanted the orange juice. And if you do, I won’t come around to have orange juice with you.

I have talked about this a bit before, but not explained in much detail. I’ll try again if someone tells me why they actually believe the comparison between a good life and not existing should come out neutral or with some non-answer such as ‘undefined’. Or at least points me to where whichever philosophers have best explained this.

If ‘birth’ is worth nothing, births are worth anything

It seems many people think creating a life has zero value. Some believe this because they think the average life contains about the same amount of suffering and satisfaction. Others have more conceptual objections, for instance to the notion that a person who does not exist now, and who will otherwise not exist, can be benefited. So they believe that there is no benefit to creating life, even if it’s likely to be a happy life. The argument I will pose is aimed at the latter group.

As far as I know, most people believe that conditional on someone existing in the future, it is possible to help them or harm them. For instance, suppose I were designing a toy for one year olds, and I knew it would take more than two years to go to market. Most people would not think the unborn state of its users-to-be should give me more moral freedom to cover it with poisonous paint or be negligent about its explosiveness.

If we accept this, then conditional on my choosing to have a child, I can benefit the child. For instance, if I choose to have a child, I might then consider staying at home to play with the child. Assume the child will enjoy this. If the world where the child is born but not played with has zero value to the child, relative to the world where I don’t have the child (because we are assuming that being born is worth nothing), then the world where the child is born and played with must have positive value to the child, relative to the world where it is not born.

On the other hand suppose I had initially assumed that I would stay at home to play with any child I had, before I considered whether to have a child. Then according to the assumption that any birth is worth nothing, the world where I have the child and play with it is worth nothing more than the one where I don’t have it. This is inconsistent with the previous evaluation unless you accept that the value of an outcome may  depend on your steps in imagining it.

Any birth could be conceptually divided into a number of acts in this way: creating a person in some default circumstance, and improving or worsening the circumstances in any number of ways. If there is no reason to treat a particular set of circumstances as a default, any amount of value can be attributed to any birth situation by starting with a different default labelled ‘birth’ and setting it to zero value. If creating life under any circumstances is worth nothing, a specific birth can be given any arbitrary value. This seems  harder to believe, and further from usual intuitions, than believing that creating life usually has a non-zero value.
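A toy illustration of the baseline problem, in Python with made-up numbers (the ‘playing bonus’ is an assumption for the example, not part of the original argument):

```python
# Toy illustration: the same 'child is born and played with' world, valued against
# two different default 'births' that are each set to zero.

PLAYING_BONUS = 10  # assumed extra value to the child of being played with

# Default A: 'born but not played with' is the zero-valued birth.
value_given_default_a = 0 + PLAYING_BONUS   # the played-with world comes out at +10

# Default B: 'born and played with' is itself the zero-valued birth.
value_given_default_b = 0                   # the very same world comes out at 0

print(value_given_default_a, value_given_default_b)  # 10 0: same world, different values
# With no privileged default, any number could be attributed to the same specific birth.
```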

You might think that I’m unfair to interpret ‘creating life is worth nothing’ as ‘birth and anything that might come along with it is worth nothing’, but this is exactly what is usually claimed: that creating a life is worth nothing, even if you expect it to be happy, however happy. I am quite willing to agree that some standard circumstance of birth is worth nothing, and that births in happier circumstances are worth more, and those in worse circumstances are worth negative amounts. This is my usual position, and the one that the people I am debating here object to.

If you believe creating a life is in general worth nothing, do you also believe that a specific birth can be worth any arbitrary amount?

Why are promisers innocent?

It is generally considered unethical to break promises. It is not considered unethical to make promises you would have been better off not to make. Yet when a promise is made and then broken, there is little reason in the abstract to suppose that either the past promiser or the present promise breaker made a better choice about what the future person should do.


For instance suppose a married woman has an affair. Much moral criticism is usually directed at her for having the affair, yet almost none is directed at her earlier self for marrying her husband in the first place.

It’s not that the later woman, who broke the promise, caused more harm than the earlier woman. Both of their acts were needed together to cause the broken promise. The later woman would have been acting just fine if the earlier woman hadn’t done what she did.

I think we direct all criticism to the later woman who breaks the promise because it is very useful to be seen as someone who thinks it’s important to keep promises. It is of little use to be seen as the sort of person who doesn’t make stupid promises, except as far as it suggests we are more likely to keep promises.

This seems to me a clear case of morality being self-serving. It serves others too in this case, as usual, but the particular form of it is chosen to help its owner. Which is not particularly surprising if you think morality is a bunch of useful behaviours that evolved like all our other self-serving bits and pieces. However if you think it is more like maths – something which is actually out there, and which we have somehow evolved to be able to intuitively appreciate – it is more surprising that it is self-serving like this.