Tag Archives: ethics

Does SI make everyone look like swimsuit models?

William Easterly believes Sports Illustrated’s swimsuit issue imposes an externality on women, through its ‘relentless marketing of a “swimsuit” young female body type as sex object’. He doesn’t explain how this would happen.

As far as I can tell, the presumed effect is that pictures of women acting as ‘sex objects’ cause men to increase their credence that all other women are ‘sex objects’. I’m a bit puzzled about the causal path toward badness after that, since men do not seem on the whole less friendly when hoping for sex.

I think the important bit here must be about ‘objects’. I have no idea how one films someone as if they are an object. The women in SI don’t look inanimate, if that’s what it’s about. It’s also hard to make robots that good. I will guess that ‘sex object’ means something like ‘low status person to have sex with’, as opposed to just being sexually alluring. It seems unlikely that the concern is that women are taken to be sexier than they really are, so I think the problem is that they are taken to be low status in this particular sexy way.

If I guessed right so far, I think it is true that men increase their expectation that all other women are sex objects when they view videos of women being sex objects. I doubt this is a big effect, since they have masses of much better information about the sexiness and status of women around them. Nonetheless, I agree it is probably an effect.

However as usual, we are focussing on the tiny gender related speck of a much larger issue. Whenever a person has more than one characteristic, they give others the impression that those characteristics tend to go together, externalising to everyone else with those characteristics. When we show male criminals on the news, it is an externality to all other men. When we show clowns with big red noses it is an externality to all other people with big red noses. When I go outside it gives all onlookers a minuscule increase in their expectation that a tallish person will tend to be brown haired, female, dressed foreignly and not in possession of a car.

Most characteristics don’t end up delineating much of an externality, because we mostly don’t bother keeping track of all the expectations we could have connected to tallish people. What makes something like this a stronger effect is the viewers deciding that tallishness is more or less of a worthwhile category to accrue stereotypes about. I expect gender is well and truly forever high on the list of characteristics popularly considered worth stereotyping about, but people who look at everything with the intent of finding and advertising any hint of gender differential implied by it can only make this worse.

Or better. As I pointed out before, while expecting groups to be the same causes externalities, they are smaller ones than if everyone expected everyone to have average human characteristics until they had perfect information about them. If people make more good inferences from other people’s characteristics, they end up sooner treating the sex objects as sex objects and the formidable intellectuals as formidable intellectuals and so forth. So accurately informing people about every way in which the experiences of men and women differ can help others stereotype more accurately. However there are so many other ways to improve accurate categorisation, why obsess over the gender tinged corner of the issue?

In sum, I agree that women who look like ‘sex objects’ increase the expectation by viewers of more women being ‘sex objects’. I think this is a rational and socially useful response on the part of viewers, relative to continuing to believe in a lower rate of sex objects amongst women. I also think it is virtually certain that in any given case the women in question should go on advertising themselves as sex objects, since they clearly produce a lot of benefit for themselves and viewers that way, and the externality is likely minuscule. There is just as much reason to think that any other person categorisable in any way should not do anything low status, since the sex object issue is a small part of a ubiquitous externality. Obsessing over the gender aspect of such externalities (and everything else) probably helps draw attention to gender as a useful categorisation, perhaps ultimately for the best. As is often the case though, if you care about the issue, only being able to see the gender related part of it is probably not useful.

What do you think? Is concern over some women being pictured as sex objects just an example of people looking at a ubiquitous issue and seeing nothing but the absurdly tiny way in which it might affect women more than men sometimes? Or is there some reason it stands apart from every other way that people with multiple characteristics help and harm those who are like them?

Update: Robin Hanson also just responded to Easterly, investigating in more detail the possible causal mechanisms people could be picturing for women in swimsuits causing harm. Easterly responded to him, saying that empirical facts are irrelevant to his claim.

Population ethics and personal identity


It seems most people think creating a life is a morally neutral thing to do while destroying one is terrible. This is apparently because, prior to being alive (and contingent on never being born), you can’t want to be alive, and nobody exists to accrue benefits or costs. For those who agree with these explanations, here’s a thought experiment.

The surprise cake thought experiment

You are sleeping dreamlessly. Your friends are eating a most delicious cake. They consider waking you and giving you a slice, before you all go back to sleep. They know you really like waking up in the night to eat delicious cakes with them and will have no trouble getting back to sleep. They are about to wake you when they realize that unless they do, you will remain unconscious and thus unable to want to join them, or to be helped or harmed. So they finish it themselves. When you awake the next day and are told how they almost wasted their cake on you, are you pleased they did not?

If not, one explanation is that you are a temporally extended creature who was awake and had preferences in the past, and that these things mean you currently have preferences. You still can’t accrue benefits or costs unless you get a bit more conscious, but it usually seems the concern is just whether there is an identity to whom the benefits and costs will apply. As an added benefit, this position would allow you to approve of resuscitating people who have collapsed.

To agree with this requires a notion of personal identity other than ‘collection of person-moments which I choose to define as me’, unless you would find the discretionary boundaries of such collections morally relevant enough to make murder into nothing at all. This kind of personal identity seems needed to make unconscious people who previously existed significantly different from those who have never existed.

It seems very unlikely to me that people have such identities. Nor do I see how it should matter if they did, but that’s another story. Perhaps those of you who think I should better defend my views on population ethics could tell me why I should change my mind on personal identity. These may or may not help.

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell if you are increasing net utility, or by how much. The critic concludes that a different moral strategy is therefore better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One is always better off putting whatever effort one is willing to contribute into the utilitarian accuracy it buys, rather than throwing it away on a strategy that is more random with regard to the goal.

A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X is somehow integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate is of what X will be under different conditions, you can’t do better by ignoring X altogether. If you had a non-estimating-X strategy which you anticipated would do better than your best estimate at getting a good value of X, then you would in fact believe yourself to have a better estimating-X strategy.
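To make that concrete, here is a minimal sketch (my own toy example; the functions, numbers and noise level are all invented) comparing a strategy that chooses actions using a very noisy estimate of X with one that ignores X and chooses at random:

```python
import random

random.seed(0)

# Toy setup: each action leads to some true value of X, but we only
# observe X through a very noisy estimate.
def true_x(action):
    return action * 0.5

def noisy_estimate(action, noise_sd=2.0):
    return true_x(action) + random.gauss(0, noise_sd)

actions = [a / 10 for a in range(101)]  # candidate actions; true X spans 0 to 5

def error_using_estimates():
    # Pick the action whose (noisy) estimated X is closest to 3.
    chosen = min(actions, key=lambda a: abs(noisy_estimate(a) - 3))
    return abs(true_x(chosen) - 3)

def error_ignoring_x():
    # Ignore X entirely and pick an action at random.
    return abs(true_x(random.choice(actions)) - 3)

trials = 10_000
est = sum(error_using_estimates() for _ in range(trials)) / trials
ign = sum(error_ignoring_x() for _ in range(trials)) / trials
print(f"mean |X - 3| using a bad estimate: {est:.2f}")
print(f"mean |X - 3| ignoring X entirely:  {ign:.2f}")
# The first number comes out lower: even a very noisy estimate of X
# does better than not estimating X at all.
```

However crude the estimate, it carries some information about X, so acting on it beats a strategy that is random with respect to X.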

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. This is arguably similar to lines like ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position. Perhaps intuitions can better implicitly assess probabilities via some other activity than explicitly thinking about them. However I have not heard this claim accompanied by any such motivating evidence. Also, if this were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether.

Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, somehow some representation of what people want has to get into whatever system of government is used, for the result to not be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly in an unknown but messy fashion while they do other things is probably not more accurate than asking people what they want. It seems however that people think of the former as a successful way around the measurement problem, not a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?

Why focus on making robots nice?

From Michael Anderson and Susan Leigh Anderson in Scientific American:

Today’s robots…face a host of ethical quandaries that push the boundaries of artificial intelligence, or AI, even in quite ordinary situations.

Imagine being a resident in an assisted-living facility…you ask the robot assistant in the dayroom for the remote …But another resident also wants the remote …The robot decides to hand the remote to her. …This anecdote is an example of an ordinary act of ethical decision making, but for a machine, it is a surprisingly tough feat to pull off.

We believe that the solution is to design robots able to apply ethical principles to new and unanticipated situations… for them to be welcome among us their actions should be perceived as fair, correct or simply kind. Their inventors, then, had better take the ethical ramifications of their programming into account…

It seems there are a lot of articles focussing on the problem that some of the small decisions robots will make will be ‘ethical’. There are also many fearing that robots may want to do particularly unethical things, such as shoot people.

Working out how to make a robot behave ‘ethically’ in this narrow sense (arguably all behaviour has an ethical dimension) is an odd problem to set apart from the myriad other problems of making a robot behave usefully. Ethics doesn’t appear to pose unique technical problems. The aforementioned scenario is similar to ‘non-ethical’ problems of making a robot prioritise its behaviour. On the other hand, teaching a robot when to give a remote control to a certain woman is not especially generalisable to other ethical issues such as teaching it which sexual connotations it may use in front of children, except in sharing methods so broad as to also include many more non-ethical behaviours.

The authors suggest that robots will follow a few simple absolute ethical rules like Asimov’s. Perhaps this could unite ethical problems as worth considering together. However if robots are given such rules, they will presumably also be following big absolute rules for other things. For instance if ‘ethics’ is so narrowly defined as to include only choices such as when to kill people and how to be fair, there will presumably be other rules about the overall goals when not contemplating murder. These would matter much more than the ‘ethics’. So how to pick big rules and guess their far reaching effects would again not be an ethics-specific issue. On top of that, until anyone is close to a situation where they could be giving a robot such an abstract rule to work from, the design of said robots is so open as to make the question pretty pointless, except as a novel way of saying ‘what ethics do I approve of?’.

I agree that it is useful to work out what you value (to some extent) before you program a robot to do it, particularly including overall aims. Similarly I think it’s a good idea to work out where you want to go before you program your driverless car to drive you there. This doesn’t mean there is any eerie issue of getting a car to appreciate highways when it can’t truly experience them. It also doesn’t present you with any problem you didn’t have when you had to drive your own car – it has just become a bit more pressing.

[Image: rainbow robot. Making rainbows has much in common with other manipulations of water vapor. Image by Jenn and Tony Bot via Flickr]

Perhaps, on the contrary, ethical problems are similar in that humans have very nuanced ideas about them and can’t really specify satisfactory general principles to account for them. If the aim is for robots to learn how to behave just from seeing a lot of cases, without being told a rule, perhaps this is a useful category of problems to set apart? No – there are very few things humans deal with that they can specify directly. If a robot wanted to know the complete meaning of almost any word it would have to deal with a similarly complicated mess.

Neither are problems of teaching (narrow) ethics to robots united in being especially important, or important in similar ways, as far as I can tell. If the aim is something like treating people well, people will be made much happier by the robot giving the remote control to anyone at all, rather than ignoring them all until it has finished sweeping the floors, than by it getting the question of who to give it to exactly right. Yet how to get a robot to prioritise floor cleaning below remote allocating at the right times seems an uninteresting technicality, both to me and seemingly to authors of popular articles. It doesn’t excite any ‘ethics’ alarms. It’s like wondering how the control panel will be designed in our teleportation chamber: while the rest of the design is unclear, it’s a pretty uninteresting question. When the design is more clear, to most it will be an uninteresting technical matter. How robots will be ethical or kind is similar, yet it gets a lot of attention.

Why is it so exciting to talk about teaching robots narrow ethics? I have two guesses. One, ethics seems such a deep and human thing that it is engaging to frighten ourselves by associating it with robots. Two, we vastly overestimate the extent to which the value of outcomes reflects the virtue of motives, so we hope robots will be virtuous, whatever their day jobs are.

Poverty does not respond to incentives

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to responding to an armed threat by eliminating the victim’s ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond. If poor people take the option of being ‘exploited’, they won’t get offered such good alternatives in future as they will if they hold out.

This seems unlikely, but it reminds me of a real difference between these situations. If you forcibly prevent the person with the gun to their head from giving in to the threat, the person holding the gun will generally want to withdraw it, as she now has nothing to gain and everything to lose. The world, on the other hand, will not relent from making people poor if you prevent the poor people from responding to it.

I wonder if the misintuition that the world will treat people better if they can’t give in to its ‘coercion’ comes from familiarity with how single-agent threateners behave in that situation. As a side note, this makes preventing ‘exploitative’ trade worse relative to preventing threatened parties from giving in to threats.

Ignorance of non-existent preferences

I often hear it said that since you can’t know what non-existent people or creatures want, you can’t count bringing them into existence as a benefit to them, even if you guess they will probably like it. For instance Adam Ozimek makes this argument here.

Does this absolute agnosticism about non-existent preferences mean it is also a neutral act to bring someone into existence when you expect them to have a net nasty experience?

Dignity

Dignity is apparently big in parts of ethics, particularly as a reason to stop others doing anything ‘unnatural’ regarding their bodies, such as selling their organs, modifying themselves or reproducing in unusual ways. Dignity apparently belongs to you except that you aren’t allowed to sell it or renounce it. Nobody who finds it important seems keen to give it a precise meaning. So I wondered if there was some definition floating around that would sensibly warrant the claims that dignity is important and is imperiled by futuristic behaviours.

These are the ones I often came across variations on:

The state or quality of being worthy of respect

An innate moral worthiness, often considered specific to homo sapiens.

Being respected by other people is sure handy, but so are all the other things we trade off against one another at our own whims. Money is great too for instance, but it’s no sin to diminish your wealth. Plus plenty of things people already do make other people respect them less, without anyone thinking there’s some ethical case for banning them. Where are the papers condemning being employed as a cleaner, making jokes that aren’t very funny, or drunkenly revealing your embarrassing desires? The mere act of failing to become well read and stylishly dressed is an affront to your personal dignity.

This may seem silly; surely when people argue about dignity in ethics they are talking about the other, higher definition – the innate worthiness that humans have, not some concrete fact about how others treat you. Apparently not though. When people discuss organ donation for instance, there is no increased likelihood of ceasing to be human and losing whatever dollop of inherent worth that comes with it during the operation just because cash was exchanged. Just plain old risk that people will think ill of you if you sell yourself.

The second definition, if it innately applies to humans without consideration for their characteristics, is presumably harder to lose. It’s also impossible to use. How you are treated by people is determined by what those people think of you. You can have as much immeasurable innate worthiness as you like; you will still be spat on if people disagree with reality, which they probably will, having no faculties for perceiving innate moral values. Reality doesn’t offer any perks to being inherently worthy either. So why care if you have this kind of dignity, even if you think such a thing exists?

Paternity tests endanger the unborn

Should paternity testing be compulsory at birth? In discussions of this elsewhere I haven’t seen one set of interests come up: those of children who would not be born if their mothers were faithful. At the start of mandatory paternity testing there would be a round of marriages breaking up at the hospital, but soon unfaithful women would learn to be more careful, and there just wouldn’t be so many children. This is pretty bad for the children who aren’t. Is a life worth more than not being cuckolded? Consider, if you could sit up on a cloud and choose whether to be born or not, knowing that at some point in your life you would be cuckolded if you lived, would you? If so, it looks like you shouldn’t support mandatory paternity testing at the moment. This is of course an annoying side effect of an otherwise fine policy. If incentives for childbearing were suitably high it would not be important, but at the moment the marginal benefit of having a child appears reasonably high, so the population effects of other policies such as this probably overwhelm the benefits of their intentional features.

You may argue that the externalities from people being alive are so great that additional people are a bad thing – if they are a very bad thing then the population effect may still dominate, but mean that the policy is a good idea regardless of the effect on married couples. I haven’t seen a persuasive case for the externalities of a person strongly negative enough to make up for the greatness of being alive, but feel free to point me to any.

Being useless to express care

Imagine you were aiming to appear to care about something or somebody else. One way you could do it is to work out exactly what would help them and do that. What could possibly look like you care about them more? The first problem here is that onlookers might not know what is really helpful, especially if you had to do any work to figure it out, so they won’t recognize your actions as helpful. You would do better to do something that most people believe would be helpful than something that you know would be.

Another problem arises if everyone knows the thing is helpful to others, but they also know that you could do the same thing to help yourself. From their perspective, you are probably helping yourself. Here you can solve both problems at once by just doing something that credibly doesn’t help you. People will assume there is some purpose, and if it’s not self serving it’s probably for someone else. You can demonstrate care better with actions which are obviously useless to you and plausibly useful to someone else than actions plausibly useful to you and obviously useful to someone else. Fasting to raise awareness for the hungry looks more sincere than eating to raise money for the hungry.

I wonder if this plays a part in the choice of political leaning, explaining why supporters of the economic left are taken to be more caring. Left or right wing economic policies could both be argued to help society. However right wing economic policies are also supported by people who want to maintain control of their possessions, while left wing economic policies should not be supported for selfish reasons except by the long term welfare dependent. This means that if you care about expressing care, you should join the left whether right wing policy looks better or worse for everyone overall. Otherwise you will be mistaken for selfish. If this is true then the best way to support right wing policy could be to popularise reasons for selfish people to support left wing policy.

Added 9/2/11: Robin Hanson gives more examples of people giving less usefully to show care.

Perfect principles are for bargaining

When people commit to principles, they often consider one transgression ruinous to the whole agenda. Eating a sausage by drunken accident can end years of vegetarianism.

As a child I thought this crazy. Couldn’t vegetarians just eat meat when it was cheap under their rationale? Scrumptious leftovers at our restaurant, otherwise to be thrown away, couldn’t tempt the vegetarian kids I knew. It would break their vegetarianism. Break it? Why did the integrity of the whole string of meals matter? Any given sausage had such a tiny effect.

I eventually found two explanations. First, it’s easier to thwart temptation if you stake the whole deal on every choice. This is similar to betting a thousand dollars that you won’t eat chocolate this month. Second, commitment without gaps makes you seem a nicer, more reliable person to deal with. Viewers can’t necessarily judge the worthiness of each transgression, so they suspect the selectively committed of hypocrisy. Plus everyone can better rely on and trust a person who honors his commitments with less regard to consequence.

There’s another good reason though, which is related to the first. For almost any commitment there are constantly other people saying things like ‘What?! You want me to cook a separate meal because you have some fuzzy notion that there will be slightly less carbon emitted somewhere if you don’t eat this steak?’ Maintaining an ideal requires constantly negotiating with other parties who must suffer for it. Placing a lot of value on unmarred principles gives you a big advantage in these negotiations.

In negotiating generally, it is often useful to arrange visible costs to yourself for relinquishing too much ground. This is to persuade the other party that if they insist on the agreement being in that region, you will truly not be able to make a deal. So they are forced to agree to a position more favorable to you. This is the idea behind arranging for your parents to viciously punish you for smoking with your friends if you don’t want to smoke much. Similarly, attaching a visible large cost – the symbolic sacrifice of your principles – to relieving a friend of cooking tofu persuades your friend that you just can’t eat with them unless they concede. So that whole conversation is avoided, determined in your favor from the outset.
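A toy way to see the bargaining logic (a sketch in the spirit of Schelling; the payoff numbers are entirely invented for illustration) is to work backwards through the choices. The visible cost of breaking an absolute principle changes which refusal is credible, and so changes what the other party chooses to cook.

```python
# Backward induction on a tiny two-move game: the friend decides what to cook,
# anticipating how the (possibly committed) vegetarian will respond.
# All payoffs are made-up numbers purely for illustration.

def vegetarian_choice(commitment_penalty):
    # If only steak is served, the vegetarian chooses between eating it
    # (mild distaste, plus whatever it costs to break an absolute principle)
    # and refusing and going hungry.
    eat = -1 - commitment_penalty
    refuse = -3
    return "eat" if eat > refuse else "refuse"

def friend_choice(commitment_penalty):
    # The friend anticipates the vegetarian's response to a steak-only meal.
    response = vegetarian_choice(commitment_penalty)
    steak_payoff = 0 if response == "eat" else -4  # wasted meal if refused
    tofu_payoff = -2                               # hassle of a second dish
    return "steak" if steak_payoff > tofu_payoff else "tofu"

for label, penalty in [("flexible principle", 0), ("absolute principle", 10)]:
    print(label, "->", friend_choice(penalty))
# flexible principle -> steak
# absolute principle -> tofu
```

With the absolute commitment in place, refusing becomes the credible response, the friend’s best move flips to cooking tofu, and the negotiation never has to happen out loud.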

I used to be a vegetarian, and it was much less embarrassing to ask for vegetarian food then than it was afterward, when I merely wanted to eat vegetarian most of the time. Not only does absolute commitment get you a better deal, but it allows you to commit to such a position without disrespectfully insisting on sacrificing the other’s interests for a small benefit.

Prompted by The Strategy of Conflict by Thomas Schelling.