Tag Archives: how to think

Affecting everything

People often argue that X is hugely important because it affects everything else. Sleep is so important because it affects your whole day. You should value your health more than anything because you need it for everything else. And your freedom too. And friends, and food. AI is the most important thing to work on because you could use it to get anything else. Same with anything that makes money, or gains power. Also sociology, because it’s about understanding people, and everything else we care about depends on people’s behaviour. And maths, science, and engineering are more important than anything because they illuminate the rest of the world, which is the most important thing too. Politics is most important because it determines the policies our country runs under, which affect everything. Law is similar. I assume garbage collectors know they are doing the most important thing because without garbage disposal society would collapse.

It turns out an awful lot of things affect everything, and a lot of them affect a lot of things a lot. That something has a broad influence is certainly a good starting criterion for it being important. It’s just a really low bar. It shouldn’t be the whole reason anyone does science or repairs roads, because it doesn’t distinguish those activities from a huge number of other ones. There is more than one thing that affects everything, because the things we might care about are not causally organized like a tree; they are organized like a very loopy web of loops.

[Image: a segment of a social network. Even the dots on the right affect everything. Via Wikipedia]

Often this ‘affects everything’ criterion is not even used on any relevant margin. It is used in the sense that if you didn’t have sleep or any understanding of humans at all you would be in a much worse situation than if you had these things in abundance. A better question is whether sleeping another half hour or dedicating your own career to sociology is going to make a huge difference to everything. An even better question is whether it’s going to make an even bigger difference to everything than anything else you could do with that half hour or career. This is pretty well known, and applied in many circumstances, but for some reason it doesn’t stop people arguing from the interconnectedness of everything to the maximal importance of whatever they are doing.

Perhaps it is psychologically useful to have an all-purpose excuse that lets anyone doing anything that contributes at all to our hugely interconnected society feel like they are doing the most important thing ever. But if you really want to do something unusually useful, you’ll need a stronger criterion than ‘it affects everything’.

How to talk to yourself

[Image: Scandinavian Airlines (SAS) airplane on Kiruna... Via Wikipedia]

Mental module 2: Eeek! Don’t make me go on that airplane! We will surely die! No no no!

Mental module 1: There is less than one in a million chance we die if we get on that airplane, based on actual statistics from airplanes that are, as far as you are concerned, identical.

Mental module 2: No!! It’s a big metal box in the sky – that can’t work. Panic! Panic!

Mental module 1: If we didn’t have an incredible pile of data from other big metal boxes in the sky, your argument would have non-negligible bearing on the situation.

Mental module 2: But what if it crashes??

Mental module 1: Our lives would be much nicer if you paid attention to probabilities as well as how you feel about outcomes.

Mental module 2: It will shudder and tip over and we will not know how to update our priors on that, and we will be terrified, briefly, before we die!

Mental module 1: If shuddering and tipping over were actually good evidence that the plane was going to crash, there would presently be an incredibly small chance of their occurring, so you need not worry.

Mental module 2: We could crash into the rocks!!! Rocks! In our face! At terminal velocity! And bits of airplane! Do you remember that movie where an airplane crashed? There were bits of burning people everywhere. And what about those pictures you saw on the news? It’s going to be terrible. Even if we survive we will probably be badly injured and in the middle of a jungle, like that girl in that documentary. And what if we get deep vein thrombosis? We might struggle halfway out of the jungle on one leg only to get a pulmonary embolism and suddenly die with no hope of medical help, which probably wouldn’t help anyway.

Mental module 1: (realizing something) But Me 2, we identify with being rational, like clever people we respect. Thinking the plane is going to crash is not rational.

Mental module 2: Yeah, rationality! I am so rational. Rationality is the greatest thing, and we care about it infinitely much! Who cares if the plane is really going to crash – I sure won’t believe it will, because that’s not rational!

Mental module 1: (struggling to overcome normal urges) Yes, now you understand.

Mental module 2: And even when it’s falling from the sky I won’t be scared, because that would not be rational! And when we smash into the ground, we will die for rationality! Behold my rationality!

Mental module 1: (to herself and onlookers from non-fictional universes) It may seem reasonable to reason with yourself, but after years of attempting it – just because that’s what comes naturally – I think doing so relies on a false assumption: that other mental modules are like me somewhere deep down, and will eventually be moved by reasonable arguments, if only they get enough of them to overcome their inferior reasoning skills. Perhaps I have assumed this because I would like it to be true, or just because it is easiest to picture others as being like oneself.

In reality, the assumption is probably false. If part of your brain (or social network) doesn’t respond sensibly to information for the first week – or decade – of your acquaintance, you should be entertaining the possibility that they are completely insane. It is not obvious that well-reasoned arguments are the best strategy for dealing with an insane creature, or for that matter with almost any object. Well-reasoned arguments are probably not what you use with your ferret or your fire alarm.

Even if the mental module’s arguments are always only a bit flawed and can easily be corrected, resist the temptation to persist in correcting them if it isn’t working. An ongoing stream of slightly inaccurate arguments leading to the same conclusion is a sign that the arguments and the conclusion are causally connected in the wrong direction. In such cases, accuracy is futile.

Mental module 2 is a prime example, alas. She basically just expresses and reacts to emotions connected to whatever has her attention, and jumps to ‘implications’ through superficial associations. She doesn’t really do inference, and probability is a foreign concept to her. The effective ways to cooperate with her, then, are to distract her with something prompting more convenient emotions, or to direct her attention toward different emotional responses connected to the present issue. Identifying with being rational is a useful trick because it provides a convenient alternative emotional imperative – to follow the directions of the more reasonable part of oneself – in any situation where the irrational mental module can picture a rationalist.

Mental module 2: Oh yes! I’m so rational I tricked myself into being rational!

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P.’

This could make sense if X weren’t especially integral to the goal. For instance, if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell whether you are increasing net utility, or by how much. The critic then concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly does not warrant abandoning the goal. One is always better off putting whatever effort one is willing to contribute into what utilitarian accuracy it buys, rather than throwing it away on a strategy that is more random with respect to the goal.

A CEO would sound ridiculous making this argument to his shareholders: ‘You guys are being ridiculous. It’s just not possible to know exactly how much any given action will increase the value of the company. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X is somehow integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate of X under different conditions is, you can’t do better by ignoring X altogether. If you have a non-X-estimating strategy that you anticipate would do better than your best estimate at getting a good value of X, then you in fact believe yourself to have a better X-estimating strategy.
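
To make that last point concrete, here is a minimal simulation sketch with entirely made-up numbers: an agent who acts on even a fairly noisy estimate of X lands closer to the target than one who refuses to estimate X at all, and any rule that reliably beat the noisy estimate would itself amount to a better implicit estimate of X.

```python
import random

# Toy version of the 'make X as close to three as possible' example.
# X is our action plus an unknown disturbance. One strategy acts on a noisy
# estimate of the disturbance; the other refuses to estimate X and acts blindly.
# All numbers here are invented purely for illustration.

random.seed(0)
trials = 100_000
err_noisy_estimate = 0.0
err_no_estimate = 0.0

for _ in range(trials):
    disturbance = random.gauss(0, 2)               # unknown contribution to X
    estimate = disturbance + random.gauss(0, 1.5)  # bad, but not worthless, estimate

    action_estimating = 3 - estimate               # aim for X = 3 using the estimate
    action_blind = random.uniform(-5, 5)           # ignore X altogether

    err_noisy_estimate += abs((action_estimating + disturbance) - 3)
    err_no_estimate += abs((action_blind + disturbance) - 3)

print("mean distance from 3, acting on a noisy estimate:", err_noisy_estimate / trials)
print("mean distance from 3, ignoring X altogether:     ", err_no_estimate / trials)
```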

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. This is arguably similar to some lines of ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position: perhaps intuitions can better implicitly assess probabilities via some other activity than by explicitly thinking about them. However, I have not heard this claim accompanied by any such motivating evidence. Also, if it were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether.
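
As a rough sketch of that last suggestion (the word-to-probability mapping and the equal weighting below are my own invented placeholders, not any standard), one could translate qualitative judgments into approximate probabilities and pool them with an explicit estimate in log-odds space:

```python
import math

# Sketch: combine qualitative risk judgments with an explicit quantitative estimate
# instead of discarding either. The mapping from words to numbers and the equal
# weighting are assumptions made for illustration only.

WORD_TO_PROB = {"very unlikely": 0.05, "unlikely": 0.2, "even odds": 0.5,
                "likely": 0.8, "very likely": 0.95}

def logit(p):
    return math.log(p / (1 - p))

def pool(probs):
    """Average estimates in log-odds space, then convert back to a probability."""
    mean_logit = sum(logit(p) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-mean_logit))

estimates = [
    WORD_TO_PROB["unlikely"],   # a scenario-based qualitative judgment
    0.1,                        # an explicit probabilistic risk assessment
    WORD_TO_PROB["even odds"],  # another qualitative view
]
print("pooled probability:", round(pool(estimates), 3))
```

Pooling in log-odds space is just one simple choice; the point is only that qualitative judgments can enter the same calculation as quantitative ones rather than replacing them.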

Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, somehow some representation of what people want has to get into whatever system of government is used, for the result to not be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly in an unknown but messy fashion while they do other things is probably not more accurate than asking people what they want. It seems however that people think of the former as a successful way around the measurement problem, not a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?

Know thyself vs. know one another

People often aspire to the ideal of honesty, implicitly including both honesty with themselves and honesty with others. Those who care about it a lot often aim to be as honest as they can bring themselves to be, across circumstances. If the aim is to get correct information to yourself and other people, however, I think this approach isn’t the greatest.

There is probably a trade-off between being honest with yourself and being honest with others, so trying hard to be honest with others comes at the cost of being honest with yourself, which in turn also prevents correct information from getting to others.

Why would there be a trade off? Imagine your friend said, ‘I promise that anything you tell me I will repeat to anyone who asks’. How honest would you be with that friend? If you say to yourself that you will report your thoughts to others, why wouldn’t the same effect apply?

Progress in forcing yourself to be honest with others must therefore be something of an impediment to being honest with yourself. Being honest with yourself is presumably also a disincentive to being honest with others later, but that is less of a cost, since if you are dishonest with yourself you are presumably deceiving them about those topics either way.

For example, imagine you are wondering what you really think of your friend Errol’s art. If you are committed to truthfully admitting whatever the answer is to Errol or your other friends, it will be pretty tempting to sincerely interpret whatever experience you are having as ‘liking Errol’s art’. This way both you and the others end up deceived. If you were committed to lying in such circumstances, you would at least have the freedom to find out the truth yourself. This seems like the superior option for the truth-loving honesty enthusiast.

This argument relies on the assumptions that you can’t fully consciously control how deluded you are about the contents of your brain, and that the unconscious parts of your mind that control this respond to incentives. These things both seem true to me.

Poverty does not respond to incentives

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to preventing someone from giving in to an armed threat: eliminating their ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond: if poor people take the option of being ‘exploited’, they won’t be offered such good alternatives in future as they will if they hold out.

This seems unlikely, but it reminds me of a real difference between the two situations. If you forcibly prevent the person with the gun to their head from responding to the threat, the person holding the gun will generally want to retract the threat, as she now has nothing to gain and everything to lose. The world, on the other hand, will not relent from making people poor if you prevent poor people from responding to it.

I wonder if the misintuition that the world will treat people better if they can’t give in to its ‘coercion’ is a result of familiarity with how single-agent threateners behave in this situation. As a side note, this difference makes preventing ‘exploitative’ trade worse relative to preventing threatened parties from giving in to threats.

Is cryonicists’ selfishness distance induced?

Tyler‘s criticism of cryonics, shared by others including me at times:

Why not save someone else’s life instead?

This applies to all consumption, so is hardly a criticism of cryonics, as people pointed out. Tyler elaborated that it just applies to expressive expenditures, which Robin pointed out still didn’t pick out cryonics over the vast assortment of expressive expenditures that people (who think cryonics is selfish) are happy with. So why does cryonics instinctively seem particularly selfish?

I suspect the psychological reason cryonics stands out as selfish is that we rarely have the opportunity to selfishly splurge on something so far in the far reaches of far mode as cryonics, and far mode is the standard place to exercise our ethics.

Cryonics is about what will happen in a *long time* when you *die*  to give you a *small chance* of waking up in a *socially distant* society in the *far future*, assuming you *widen your concept* of yourself to any *abstract pattern* like the one manifested in your biological brain and also that technology and social institutions *continue their current trends* and you don’t mind losing *peripheral features* such as your body (not to mention cryonics is *cold* and seen to be the preserve of *rich* *weirdos*).

You’re not meant to be selfish in far mode! Freeze a fair princess you are truly in love with or something. Far mode enlivens our passion for moral causes and abstract values. If Robin is right, this is because it’s safe to be ethical about things that won’t affect you, yet it still sends signals to those around you about your personality. It’s a truly mean person who won’t even claim that someone else a long way away should have been nice fifty years ago. So when technology brings the potential for far things to affect us more, we mostly don’t have the built-in selfishness required to zealously chase the offerings.

This theory predicts that other personal expenditures on far mode items will also seem unusually selfish. Here are some examples of psychologically distant personal expenditures to test this:

  • space tourism
  • donating to/working on life extension because you want to live forever
  • traveling in far away socially distant countries without claiming you are doing it to benefit or respect the locals somehow
  • astronomy for personal gain
  • buying naming rights to stars
  • lottery tickets
  • maintaining personal collections of historical artifacts
  • building statues of yourself to last long after you do
  • recording your life so future people can appreciate you
  • leaving money in your will to do something non-altruistic
  • voting for the party that will benefit you most
  • supporting international policies to benefit your country over others

I’m not sure how selfish these seem compared to other non-altruistic purchases. Many require a lot of money, which I suspect makes anything seem selfish. What do you think?

If this theory is correct, does it mean cryonics is unfairly slighted because of a silly quirk of psychology? No. Your desire to be ethical about faraway things is not obviously less real or legitimate than your desire to be selfish about near things, assuming you act on it. If psychological distance really is morally relevant to people, it’s consistent to think cryonics too selfish and most other expenditures not. If you don’t want psychological distance to be morally relevant then you have an inconsistency to resolve, but how you should resolve it isn’t immediately obvious. I suspect, however, that as soon as you discard cryonics as too selfish you will get out of far mode and use that money on something just as useless to other people and worth less to yourself, but in a realm more fitting for selfishness. If so, you lose out on a better selfish deal for the sake of not having to think about altruism. That’s not altruistic; it’s worse than selfishness.

Explanatory normality fallacy

Only a psychologist thinks to ask why people laugh at jokes. – Someone (apparently)

A common error in trying to understand human behavior is to think something is explained because it is intuitively familiar to you. The wrong answer to ‘I wonder why people laugh at jokes?’ is ‘They are funny, duh’. This is an unrealistically obvious example; the error can be harder to see. Why do we like art? Because it’s aesthetically pleasing. Why does sex exist? For reproduction. These are a popular variety of mind projection fallacy.

Freedom is slavery

These comparisons are sometimes made as arguments for forcibly preventing the former item in each pair:

  • Selling equity in yourself is like slavery
  • Allowing organ selling is like stealing organs
  • Choosing genetic characteristics of your children is like eugenics
  • Languages dying out is like genocide
  • Selling babies is like slavery, or is like stealing babies and selling them
  • Sweatshops are like slavery
  • Euthanasia is like murder
  • Prostitution is like rape
  • Globalization is like colonialism
  • Any more to add?

The general pattern:

Freely chosen X is like X coerced. And as X coerced is bad, we should prevent X (coercively if need be).

Why is this error prevalent? I suspect it stems from assuming value to be in goods or activities, rather than in the minds of their beholders. Consent is important because it separates those who value something enough to do it from those who don’t. Without the idea that people value things by different amounts, consent seems just another nice thing to have, but not functional: if most people wouldn’t make a choice unless forced, then the choice must be bad, so others making it should be stopped.

[Image: Bought kidneys look like stolen kidneys; can you spot the difference?]

I wonder if this is related to the misunderstanding that trade must be exploitative, because employers gain and the gain must come from somewhere. This also appears to stem from overlooking the possibility that people place different values on the same things, so extra value can be created by exchange.

This is related.

Mistakes with nonexistent people

Who is better off if you live and I die? Is one morally obliged to go around impregnating women? Is the repugnant conclusion repugnant? Is secret genocide OK? Does it matter if humanity goes extinct? Why shouldn’t we kill people? Is pity for the dead warranted?

All of these discussions often come down to the same question: whether to care about the interests of people who don’t exist but could.

I shan’t directly argue either way; care about whatever you like. I want to show that most of the arguments against caring about the non-existent which repeatedly come up in casual discussion rely on two errors.

Here are common arguments (paraphrased from real discussions):

  1. There are infinitely many potential people, so caring about them is utterly impractical.
  2. The utility that a non-existent person experiences is undefined, not zero. You are calculating some amount of utility and attributing it to zero people. This means utility per person is x/0 = undefined.
  3. Causing a person to not exist is a victimless crime. Stop pretending these people are real just because you imagine them!
  4. If someone doesn’t exist, they don’t have preferences, so you can’t fulfil them. This includes not caring if they exist or not. The dead do not suffer, only their friends and relatives do that.
  5. Life alone isn’t worth anything – what matters is what happens in it, so creating a new life is a neutral act.
  6. You can’t be dead. It’s not something you can be. So you can’t say whether life is better.
  7. Potential happiness is immeasurable; the person could have been happy, they could have been sad. Their life doesn’t exist, so it doesn’t have characteristics.
  8. How can you calculate loss of future life? Maybe they’d live another hundred years, if you’re going to imagine they don’t die now.

All of these arguments spring from two misunderstandings:

First misunderstanding: thinking of value as being a property of particular circumstances rather than of the comparison between choices of circumstances.

[Image: People who won’t exist under any of our choices are of no importance (picture: Michelangelo)]

We need never be concerned with the infinitely many people who don’t exist. All those who won’t exist under any choice we might make are irrelevant. The question is whether those who do exist under one choice we can make, and don’t exist under another, would be better off existing. This is where argument 1 goes wrong.

Arguments 2, 3 and 4 make this mistake too. The utility we are talking about accrues in the possible worlds where the person does exist, and has preferences. Saying someone is worse off not existing is saying that in the worlds where they do exist they have more utility. It is not saying that where they don’t exist they experience suffering, or that they can want to exist when they do not.

Second misunderstanding: assuming there is nothing to be known about something that isn’t the case.

If someone doesn’t exist, you don’t just not know about their preferences; they actually don’t have any. So how can you say anything about them? If a person dies now, how can you say anything about how long they would have lived, or how good their life could have been? It’s all imaginary. This line of thought underlies arguments 4-8.

But in no case are we discussing characteristics of something that doesn’t exist. We are discussing which characteristics are likely in the case where it does exist. This is very different.

If I haven’t made you a cake, the cake doesn’t have characteristics, and to ask whether it is chocolate flavoured is silly. But you can still guess that, conditional on my making it, it is more likely chocolate flavoured than fish flavoured. Whether I’ve made it already is irrelevant. Similarly, you can guess that if a child were born it would be more likely to find life positive (as most people seem to), and to like music and food and sex and other things it’s likely to be able to get, and not to have an enormous unsatisfiable desire for six to be prime. You can guess that, conditional on someone’s life continuing, it would probably continue until old age. These are the sorts of things we uncontroversially guess all the time about our own futures, which are of course also conditional on choices we make, so I can’t see why they would become a problem when other potential people are involved.

Are there any good arguments that don’t rely on these errors for wanting to ignore those who don’t currently exist in consequentialist calculations?