Tag Archives: consistency

Epistemology of evilness

Most everyone seems to think that a big reason for bad things happening in the world is that some people are bad. Yet I almost never see advice for telling whether you yourself are a bad person, or for what to do about it if you seem to be one. If there are so many bad people, isn’t there a very real risk that you are one of them?

Perhaps the model is one where you automatically know whether you are good or bad, and simply choose which to be. So the only people who are bad are those who want to be bad, and know that they are bad. But then if there is this big population of bad people out there who want to be bad, why is so little of the media devoted to their interests? There’s plenty on how to do all the good things that a good person would want to do, such as voting for the benefit of society, looking after your children, buying gifts, expressing gratitude to friends, holding a respectable dinner, pleasing your partner. Yet so little on scamming the elderly, effectively shaking off useless relatives, lying credibly, making money from investments that others are too squeamish to take, hiding bodies. Are the profit-driven corporate media missing out on a huge opportunity?

If there aren’t a whole lot of knowingly bad people out there who want to be bad, and could use some information and encouragement, then either there aren’t bad people at all, or bad people don’t know that they are bad or don’t want to be bad. The former seems unlikely, by most meanings of ‘bad’. If the latter is true, why are people so blasé about the possibility that they themselves might be bad?

***

Prompted by the excellent book Harry Potter and the Methods of Rationality, in which there is much talk of avoiding becoming ‘dark’, in stark contrast to the world that I’m familiar with. If you enjoy talking about HPMOR, and live close to Pittsburgh, come to the next Pittsburgh Less Wrong Meetup.

What’s wrong with advertising?

These two views seem to go together often:

  1. People are consuming too much
  2. The advertising industry makes people want things they wouldn’t otherwise want, worsening the problem

The reasoning behind 1) is usually that consumption requires natural resources, and those resources will run out. It follows from this that less natural-resource-intensive consumption is better*, i.e. the environmentalist would prefer you to spend your money on attending a dance or seeing a psychologist rather than on buying new clothes or jet skis, assuming the psychologist and the dance organisers don’t spend all their income on clothes and jet skis and such.

How does the advertising industry get people to buy things they wouldn’t otherwise buy? One practice they are commonly accused of is selling dreams, ideals, identities and attitudes along with products. They convince you (at some level) that if you had that champagne your whole life would be that much more classy. So you buy into the dream though you would have walked right past the yellow bubbly liquid.

But doesn’t this just mean they are selling you a less natural-resource-intensive product? The advertisers have packaged the natural-resource-intensive drink with a very non-natural-resource-intensive thing – classiness – and sold you the two together.

Yes, maybe you have bought a drink you wouldn’t otherwise have bought. But overall this deal seems likely to be a good thing from the environmentalist perspective: it’s hard to just sell pure classiness, but the classy champagne is much less resource intensive per dollar than a similar bottle of unclassy drink, and you were going to spend your dollars on something (effectively – you may have just not earned them, which is equivalent to spending them on leisure).

If the advertiser can manufacture enough classiness for thousands of people with a video camera and some actors, this is probably a more environmentally friendly choice for those after classiness than most of their alternatives, such as ordering stuff in from France. My guess is that in general, buying intangible ideas along with more resource intensive products is better for the environment than the average alternative purchase a given person would make.  There at least seems little reason to think it is worse.

Of course that isn’t the only way advertisers make people want things they wouldn’t otherwise want. Sometimes they manufacture fake intangible things, so that when you get the champagne it doesn’t really make you feel classy. That’s a problem with dishonest people in every industry though. Is there any reason to blame ‘advertisers’ rather than ‘cheats’?

Another thing advertisers do is tell you about things you wouldn’t have thought of wanting otherwise, or remind you of things you had forgotten about. When innovators and entrepreneurs do this we celebrate it. Is there any difference when advertisers do it? Perhaps the problem is that advertisers tend to remind you of resource intensive, material desires more often than they remind you to consume more time with your brother, or to meditate more. This is somewhat at odds with the complaint that they try to sell you dreams and attitudes etc, but perhaps they do a bit of both.

Or perhaps they try to sell you material goods to satisfy longings you would otherwise fulfil non-materially? For instance, recommending new clothes where you might otherwise have sought self-confidence through posture, public speaking practice, or doing something worthy of respect. Some such effect seems plausible, though I doubt it is a huge one.

Overall it seems advertisers probably have effects in both directions. It’s not clear to me which is stronger. But insofar as they manage to package up and sell feelings and identities and other intangibles,  those who care for the environment should praise them.

*This is not to suggest that I believe natural resource conservation is particularly important, compared to using human time well for instance.

When to explain

It is commonly claimed that humans’ explicit conscious faculties arose for explaining themselves and their intentions to others. Similarly, when people talk about designing robots that interact with people, they often mention the usefulness of designing such robots to be able to explain to you why they changed your investments or rearranged your kitchen.

Perhaps this is a generally useful principle for internally complex units dealing with each other: have some part that keeps an overview of what’s going on inside and can discuss it with others.

If so, the same seems like it should be true of companies. However my experience with companies is that they are often designed specifically to prevent you from being able to get any explanations out of them. Anyone who actually makes decisions regarding you seems to be guarded by layers of people who can’t be held accountable for anything. They can sweetly lament your frustrations, agree that the policies seem unreasonable, sincerely wish you a nice day, and most importantly, have nothing to do with the policies in question and so can’t be expected to justify them or change them based on any arguments or threats you might make.

I wondered why this strategy should be different for companies, and a friend pointed out that companies do often make an effort at higher-level explanations of what they are doing, though not necessarily accurate ones: vision statements, advertisements and so on. PR is often the metaphor for how the conscious mind works, after all.

So it seems the company strategy is more complex: general explanations coupled with avoidance of being required to make more detailed ones of specific cases and policies. So, is this strategy generally useful? Is it how humans behave? Is it how successful robots will behave?*

Inspired by an interaction with ETS, evidenced lately by PNC and Verizon

*assuming there is more than one

What ‘believing’ usually is

Experimental Philosophy discusses the following experiment. Participants were told a story of Tim, whose wife is cheating on him. He gets a lot of evidence of this, but tells himself it isn’t so.

Participants given this case were then randomly assigned to receive one of the two following questions:

  • Does Tim know that Diane is cheating on him?
  • Does Tim believe that Diane is cheating on him?

Amazingly enough, participants were substantially more inclined to say yes to the question about knowledge than to the question about belief.

This idea that knowledge absolutely requires belief is sometimes held up as one of the last bulwarks of the idea that concepts can be understood in terms of necessary conditions, but now we seem to be getting at least some tentative evidence against it. I’d love to hear what people think.

I’m not surprised – people often explicitly say things like ‘I know X, but I really can’t believe it yet’. This seems uninteresting from the perspective of epistemology. ‘Believe’ in common usage just doesn’t mean the same as what it means in philosophy. Minds are big and complicated, and ‘believing’ is about what you sincerely endorse as the truth, not what seems likely given the information you have. Your ‘beliefs’ are probably related to your information, but also to your emotions and wishes and simplifying assumptions, among other things. ‘Knowing’, on the other hand, seems to be commonly understood as being about your information state. Though not always – for instance ‘I should have known’ usually means ‘in my extreme uncertainty, I should have suspected enough to be wary’. At any rate, in common use knowing and believing are not directly related.

This is further evidence you should be wary of what people ‘believe’.

Compare the unconceived – don’t unchain them

People often criticise me for thinking of potential people in the way Steven Landsburg describes (without necessarily endorsing it):

…like prisoners being held in a sort of limbo, unable to break through into the world of the living. If they have rights, then surely we are required to help some of them escape.

Such people seem to believe this position is required for considering creating good lives an activity with positive value. It is not required, and I don’t think of potential people like that. My position is closer to this:

Benefit and harm are comparative notions. If something benefits you, it makes your life better than it would have been, and if something harms you it makes your life worse than it would have been. To determine whether some event benefits or harms you, we have to compare the goodness of your life as it is, given the event, with the goodness it would otherwise have had. The comparison is between your whole life as it is and your whole life as it would have been. We do not have to make the comparison time by time, comparing each particular time in one life with the same time in the other life.

That is John Broome explaining why death harms people, even on the view that all benefit and harm consist of pleasure and pain, which are things that can’t happen when you are dead. The same goes for potential people.

Yes, you can’t do much to a person who doesn’t exist. They don’t somehow suffer imaginary pains. If someone doesn’t exist in any possible worlds, I agree they can’t be helped or harmed at all. What makes it possible to affect a potential person is that there are some worlds where they do exist. It is in the comparison between these worlds and the ones where they don’t exist that I say there is a benefit to them in having one over the other. The benefit of existing consists of the usual things that we hold to benefit a person when they exist: bananas, status, silly conversations, etc. The cost of not existing relative to existing consists of failing to have those benefits, which only exist in the world where the person exists. The cost does not consist of anything that happens in the world where the person doesn’t exist. They don’t have any hypothetical sorrow, boredom or emptiness at missing out. If they did have such things and they mattered somehow, that would be another entirely separate cost.

Often it sounds crazy that a non-existent person could ‘suffer’ a cost because you are thinking of pleasures and pains (or whatever you take to be good or bad) themselves, not of a comparison between these things in different worlds. Non-existent people seem quite capable of not having pleasures or pains, not having fulfilled preferences, not having worthwhile lives, of not having anything at all, of not even having a capacity to have. Existent people are quite capable of having pleasures (and pains) and all that other stuff. If you compare the two of them, is it really so implausible that one has more pleasure than the other?

‘Potential people’ makes people think of non-existing people, but for potential people to matter morally, it’s crucial that they do exist in some worlds (in the future) and not in others. It may be better to think of them as semi-existing people.

I take it that the next counterargument is something like ‘you can’t compare two quantities when one of them is not zero, but just isn’t there. What’s bigger, 3 or … ?’ But you decide what quantities you are comparing. You can choose a quantity that doesn’t have a value in one world if you want. Similarly I could claim all the situations you are happy to compare are not comparable. Getting one hundred dollars would not benefit you, because ‘you without a hundred dollars’ just won’t be around in the world where you get paid. On the other hand if you wanted to compare benefits to Amanda across worlds where she may or may not exist, you could compare ‘how much pleasure is had by Amanda’, and the answer would be zero in worlds where she doesn’t exist. Something makes you prefer an algorithm like ‘find Amanda and see how much pleasure she has got’, where you can just fail at the finding Amanda bit and get confused. The real question is why you would want this latter comparison. I can see why you might be agnostic, waiting for more evidence of which is the  true comparison of importance or something, but I don’t recall hearing any argument for leaping to the non-comparable comparison.
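To put the two comparison rules side by side, here is a minimal sketch; the symbols w1, w2 and P are mine, purely for illustration. Let w1 be a world in which Amanda exists, w2 a world in which she never does, and P(w) the total pleasure had by Amanda in world w:

  P(w1) = whatever pleasure Amanda’s life in w1 contains
  P(w2) = 0
  benefit to Amanda of existing = P(w1) − P(w2) = P(w1)

The rival algorithm, ‘find Amanda in w and see how much pleasure she has got’, instead defines a quantity that is simply undefined at w2, so the difference comes out undefined rather than positive. Both are coherent ways to set up the comparison; the question is why the undefined one should be preferred.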


In other cases it is intuitive to compare quantities that have values, even when relevant entities differ between worlds. Would you say I have no more orange juice in my cup if I have a cup full of orange juice than if I don’t have a cup or orange juice? I wouldn’t, because I really just wanted the orange juice. And if you would, I won’t come around to have orange juice with you.

I have talked about this a bit before, but not explained it in much detail. I’ll try again if someone tells me why they actually believe the comparison between a good life and not existing should come out neutral, or with some non-answer such as ‘undefined’. Or at least points me to the philosophers who have best explained this.

If ‘birth’ is worth nothing, births are worth anything

It seems many people think creating a life has zero value. Some believe this because they think the average life contains about the same amount of suffering and satisfaction. Others have more conceptual objections, for instance to the notion that a person who does not exist now, and who will otherwise not exist, can be benefited. So they believe that there is no benefit to creating life, even if it’s likely to be a happy life. The argument I will pose is aimed at the latter group.

As far as I know, most people believe that conditional on someone existing in the future, it is possible to help them or harm them. For instance, suppose I were designing a toy for one-year-olds, and I knew it would take more than two years to go to market. Most people would not think the unborn state of its users-to-be should give me more moral freedom to cover it with poisonous paint or to be negligent about its explosiveness.

If we accept this, then conditional on my choosing to have a child, I can benefit the child. For instance if I choose to have a child, I might then consider staying at home to play with the child. Assume the child will enjoy this. If the original world had zero value to the child, relative to the world where I don’t have the child (because we are assuming that being born is worth nothing), then this new world where the child is born and played with must have positive value to the child relative to the world where it is not born.

On the other hand suppose I had initially assumed that I would stay at home to play with any child I had, before I considered whether to have a child. Then according to the assumption that any birth is worth nothing, the world where I have the child and play with it is worth nothing more than the one where I don’t have it. This is inconsistent with the previous evaluation unless you accept that the value of an outcome may depend on your steps in imagining it.

Any birth could be conceptually divided into a number of acts in this way: creating a person in some default circumstance, and improving or worsening the circumstances in any number of ways. If there is no reason to treat a particular set of circumstances as a default, any amount of value can be attributed to any birth situation by starting with a different default labelled ‘birth’ and setting it to zero value. If creating life under any circumstances is worth nothing, a specific birth can be given any arbitrary value. This seems harder to believe, and further from usual intuitions, than believing that creating life usually has a non-zero value.
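To make the arithmetic explicit, here is a minimal sketch of the two accountings above; the symbol b is mine and purely illustrative. Measure value to the child relative to the world where it is never born, and suppose being played with is an improvement worth some b > 0 to the child:

  Path A: take ‘born but not played with’ as the default ‘birth’ and set it to 0, so the world where the child is born and played with is worth 0 + b = b > 0.
  Path B: take ‘born and played with’ as the default ‘birth’ (I had already planned to stay home) and set it to 0, so that same world is worth 0.

The same outcome is valued at b in one accounting and 0 in the other, so either b = 0 for every possible improvement, or the value of the outcome depends on the order in which it was imagined.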

You might think that I’m unfair to interpret ‘creating life is worth nothing’ as ‘birth and anything that might come along with it is worth nothing’, but this is exactly what is usually claimed: that creating a life is worth nothing, even if you expect it to be happy, however happy. I am most willing to agree that some standard ‘birth’ is worth nothing, that births in happier circumstances are worth more, and that those in worse circumstances have negative value. This is my usual position, and the one that the people I am debating here object to.

If you believe creating a life is in general worth nothing, do you also believe that a specific birth can be worth any arbitrary amount?

Why can’t a man be more like a woman?

Women are often encouraged to move into male dominated activities, such as engineering. This is not because overall interest in engineering appears to be lacking, but because women’s interest seems to be less than men’s. This is arguably for cultural reasons, so it is argued that culture is inhibiting women from pursuing careers that they may be otherwise suited to and happy with.

If the symptom is that women do less engineering than men, why do we always encourage women to do more engineering, rather than encouraging men to do less? It seems we think men are presently endowed with the perfect level of engineering interest, and women should feel the same, but are impaired by culture.

This could make sense. For instance, perhaps all humans somehow naturally have the socially optimal level of engineering interest, but then insidious cultural influences eat away those interests in women. I think this is roughly how many people model the situation.

This model seems unlikely to be anywhere near the truth. Culture is packed with influences. These influences are not specific to inhibiting women’s impulses to do supposedly masculine things. They tell everyone what sort of people engineers are supposed to be, how much respect a person will get for technical abilities, how much respect they get for wealth, which interests will be taken to indicate the personal qualities they wish to express, which personal qualities are good to express, which cities are most attractive to live in, etc etc etc. Everyone’s level of inclination to be an engineer is significantly composed of cultural influences.

A cacophony of cultural influences may somehow culminate in a socially optimum level of interest in engineering of course. But it is hard to believe that some spectacular invisible mechanism orchestrates this perfect equilibrium for all cultural influences, except those that are gender specific. If there are fleets of rogue cultural influences sabotaging women’s inclinations, this must cast suspicion on the optimality of all other less infamous cultural influences.

Besides the incredible unlikelihood that all cultural influences except gender-related ones culminate in a socially optimal level of interest in a given activity, it just doesn’t look like that’s what’s going on. Socially optimal cultural influences would mainly correct for externalities, for instance encouraging activities which help others beyond what the doer would be compensated for. But this is not the criterion we use for dealing out respect. It may be part of it, or related to it, but for instance we generally do not respect mothers as much as CEOs, though many people would accept both that mothers produce huge benefits, often for little compensation, and that CEOs are paid more than they are worth. We probably respect the CEO more because it is more impressive to be a CEO.

Incidentally, the correction of cultural influences is another example of expressing pro-female sympathy by encouraging females to do manly things. It seems here we accept that many male jobs are higher status than many female jobs, so to give women more status we would like them to do more of these jobs. Notice that while more men operate garbage trucks, there is less encouragement for women to do that. But my main point here is that we are obsessed with equalising the few cultural influences which are related to gender, while ignoring the sea of other influences which may misdirect both genders equally.

If a gender gap only tells us that either men or women or both have the wrong level of interest in engineering, and we don’t know what the right level is, trying to move women’s interest to equal men’s seems about as likely to be an improvement as it is a deterioration, except to the extent people like equality for its own sake, or where the cultural influences have other effects, such as making women feel less capable or worthy. If we are really concerned about people finding places in the world which suit them and let them make a worthy contribution, we should probably focus on other influences too, rather than being mesmerised by the unfairness of a politically salient discrepancy in influence.

So when people motivate their concern about a gender gap with the thought that there might for instance be capable and potentially interested women out there, missing their calling to be engineers, I can’t feel this is a pressing problem. Without investigating the rest of the cultural influences involved, there might just as easily be capable and potentially interested men out there missing their calling to not be engineers. Or perhaps (as I suspect) both genders should be engineers more often than men are, or more rarely than women are.

The Unpresumptuous Philosopher

Nick Bostrom showed that either position in Extreme Sleeping Beauty seems absurd, then gave a third option. I argued that his third option seems worse than either of the original pair. If I am right there that the case for Bayesian conditioning without updating on evidence fails, we have a choice of disregarding Bayesian conditioning in at least some situations, or distrusting the aversion to extreme updates as in Extreme Sleeping Beauty. The latter seems the necessary choice, given the huge disparity in evidence supporting Bayesian conditioning and that supporting these particular intuitions about large updates and strong beliefs.

Notice that both the Halfer and Thirder positions on Extreme Sleeping Beauty have very similar problems. They are seemingly opposed by the same intuitions against extreme certainty in situations where we don’t feel certain, and extreme updates in situations where we hardly feel we have any evidence. Either before or after discovering you are in the first waking, you must be very sure of how the coin came up. And between ignorance of the day and knowledge, you must change your mind drastically. If we must choose one of these positions then, it is not clear which is preferable on these grounds alone.
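To see the symmetry concretely, here is the arithmetic, with N standing in for however many wakings the tails branch of the Extreme version involves (N is just an illustrative label; the point only needs it to be very large):

  Thirder (SIA): P(tails | awake) = N/(N+1), extreme certainty of tails before learning anything about the day; learning ‘this is the first waking’ multiplies the odds of tails by 1/N, dropping the credence back to 1/2 – a drastic update.
  Halfer (SSA): P(tails | awake) = 1/2; learning ‘this is the first waking’ likewise multiplies the odds of tails by 1/N, giving P(heads | first waking) = N/(N+1) – extreme certainty, reached by a drastic update.

Either way, one of the two epistemic states is extreme and the move between them is enormous; the positions differ only in where the extremity sits.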

Now notice that the Thirder position in Extreme Sleeping Beauty is virtually identical to SIA and consequently the Presumptuous Philosopher’s position (as Nick explains, p64). From Anthropic Bias:

 

The Presumptuous Philosopher

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion, trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion, trillion, trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher [...] appeals to SIA)!”

The Presumptuous Philosopher is like the Extreme Sleeping Beauty Thirder because they are both in one of two possible worlds with a known probability of existing, one of which has a much larger population than the other. They are both wondering which of these worlds they are in.

Is the Presumptuous Philosopher really so presumptuous? Analogous to the Extreme Sleeping Beauty Halfer, then, shall be the Unpresumptuous Philosopher. When the Unpresumptuous Philosopher learns there are a trillion times as many observers in T2 she remains cautiously unmoved. However, when the physicists later discover where in the cosmos our planet is under both theories, the Unpresumptuous Philosopher becomes virtually certain that the sparsely populated T1 is correct, while the Presumptuous Philosopher hops back on the fence.
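In numbers, this is a sketch of the standard SIA and SSA arithmetic as I understand the two positions, using the populations in the quoted passage (a trillion trillion = 10^24 observers under T1, 10^36 under T2) and even prior odds:

  Presumptuous (SIA): learning the observer counts multiplies the odds of T2 by 10^36/10^24 = 10^12, making T2 about a trillion times more likely; later learning which particular location we occupy multiplies the odds of T2 by 10^24/10^36 = 10^-12, putting him back at even odds.
  Unpresumptuous (SSA): learning the observer counts changes nothing, leaving even odds; learning our particular location then multiplies the odds of T1 by 10^12, making her virtually certain of the sparsely populated T1.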

The Presumptuous Philosopher is often chided for being sure the universe is infinite, given there is some chance of an infinite universe existing. It should be noted that this is only as long as he cannot restrict his possible locations in it to any finite region. The Unpresumptuous Philosopher is uncertain under such circumstances. However she believes with probability one that we are in a finite world if she knows her location is within any finite region. For instance, if she knows the age of her spatially finite universe she is certain that it will not continue for infinitely long. Here her presumptuous friend is quite unsure.


This philosopher has a nice perch now, but where will he go if evidence moves him? Photo: Yair Haklai

It seems to me that since the two positions on Extreme Sleeping Beauty are as unintuitive as each other, the two philosophers are as presumptuous as each other. The accusation of inducing a large probability shift and encouraging ridiculous certainty is hardly an argument that can be used against the SIA-Thirder-Presumptuous Philosopher position in favor of the SSA-Halfer-Unpresumptuous Philosopher side. Since the Presumptuous Philosopher is usually considered the big argument against SIA, and not considered an argument against SSA at all, an update in favor of SIA is in order.

Where is your moral thermostat?

Old news: humans regard morality as though with a ‘moral thermostat’.

…we propose a framework suggesting that moral (or immoral) behavior can result from an internal balancing of moral self-worth and the cost inherent in altruistic behavior. In Experiment 1, participants were asked to write a self-relevant story containing words referring to either positive or negative traits. Participants who wrote a story referring to the positive traits donated one fifth as much as those who wrote a story referring to the negative traits. In Experiment 2, we showed that this effect was due specifically to a change in the self-concept. In Experiment 3, we replicated these findings and extended them to cooperative behavior in environmental decision making. We suggest that affirming a moral identity leads people to feel licensed to act immorally. However, when moral identity is threatened, moral behavior is a means to regain some lost self-worth.

This doesn’t appear to always hold though. Most people oscillate happily around a normal level of virtue, eating more salad if they shouted at their child and so on, but some seem to throw consistent effort at particular moral issues, or make firm principles and stick to them.

It seems to me that there are two kinds of moral issues: obligatory and virtuous. Obligatory things include not killing people, wearing clothes in the right places, and doing whatever specific duties to god/s you will be eternally tortured for neglecting. Virtuous issues make you feel good and affect your reputation: doing favours, giving to charities, exercising, eating healthy food, buying environmentally friendly products, getting up early, being tidy, offering to wash up, cycling to work. Outside of these two categories there are what I will call ‘practical issues’. These don’t feel related to virtue at all: how to transport a new sofa home, what time to have dinner tonight, which brand of internet to get.

‘Moral thermostat’ behaviour only applies to the virtuous moral behaviours. The obligatory ones and the practical ones demand, respectively, exactly as much effort as they take, or as much as you feel like putting in. The people who pour effort into specific issues are mostly those who are persuaded that the issue is an obligatory one or a practical one. A clear example of pushing a moral issue into the territory of obligation is vegetarianism, for whatever reason it is adopted.

Agreeable ways to disable your children

Should parents purposely have deaf children if they prefer them, by selecting deaf embryos?

Those in favor argue that the children need to be deaf to partake in the deaf culture which their parents are keen to share, and that deafness isn’t really a disability. Opponents point out that damaging existing children’s ears is considered pretty nasty and not much different, and that deafness really is a disability since deaf people miss various benefits for lack of an ability.

I think the children are almost certainly worse off if they are chosen to be deaf.  The deaf community is unlikely to be better than any of the millions of other communities in the world which are based mainly on spoken language, so the children are worse off even culture-wise before you look at other costs. I don’t follow why the children can’t be brought up in the deaf community without actually being deaf either. However I don’t think choosing deaf children should be illegal, since parents are under no obligation to have children at all and deaf children are doing a whole lot better than non-existent children.

Should children be brought up using a rare language if a more common one is available?

This is a very similar question: should a person’s ability to receive information be severely impaired if it helps maintain a culture which they are compelled to join due to the now high cost of all other options? The similarity has been pointed out before, to argue that choosing deaf children is fine. The other possible inference is of course that encouraging the survival of unpopular languages is not fine.

There are a few minor differences: a person can learn another language later more easily than they can get their hearing later, though still at great cost. On the other hand, a deaf person can still read material from a much larger group of hearing people, while the person who speaks a rare language is restricted to what is produced by their language group. Nonetheless it looks like both are overwhelmingly costly to the children involved. It may be understandable that parents want to bring up their children in their own tiny language that they love, but I’m appalled that governments, linguists, schools, organizations set up for the purpose, various other well-meaning parties, and plenty of my friends, think rescuing small languages in general is a wonderful idea, even when the speakers of the language disagree. ‘Language revitalization’ seems to be almost unanimously praised as a virtuous project.
