Tag Archives: practical advice

Motivation on the margin of saving the world

Most people feel that they have certain responsibilities in life. If they achieve those they feel good about themselves, and anything they do beyond that to make the world better is an increasingly imperceptible bonus.

Some people with unusual moral positions or preferences feel responsible for making everything in the world as good as they can make it, and feel bad about the gap between what they achieve and what they could.

In both cases people have a kind of baseline that they care especially about. In the first case they are usually so far above it that nothing they do makes much difference to their feelings. In the second case they are often so far below it that nothing they do makes much difference to their feelings.

Games are engaging when you have a decent chance at both winning and losing. Every move you make matters, so you long to make that one more move. 

I expect the same is true of motivating altruistic consequentialists. I’m not sure how to make achievements on the margin more emotionally salient, but perhaps you do?

Affecting everything

People often argue that X is hugely important because it affects everything else. Sleep is so important because it affects your whole day. You should value your health more than anything because you need it for everything else. And your freedom too. And friends, and food. AI is the most important thing to work on because you could use it to get anything else. Same with anything that makes money, or gains power. Also sociology, because it’s about understanding people, and everything else we care about depends on people’s behaviour. And maths, science, and engineering are more important than anything because they illuminate the rest of the world, which is the most important thing too. Politics is most important because it determines the policies our country runs under, which affect everything. Law is similar. I assume garbage collectors know they are doing the most important thing because without garbage disposal society would collapse.

It turns out an awful lot of things affect everything, and a lot of them affect a lot of things a lot. That something has a broad influence is certainly a good starting criterion for its being important. It’s just a really low bar. It shouldn’t be the whole reason anyone does science or repairs roads, because it doesn’t distinguish those activities from a huge number of other ones. There is more than one thing that affects everything, because the set of things we might care about is not causally organized like a tree; it is organized like a very loopy web of loops.

[Image: a segment of a social network. Even the dots on the right affect everything. Image via Wikipedia]

Often this ‘affects everything’ criterion is not even used on any relevant margin. It is used in the sense that if you didn’t have sleep or any understanding of humans at all you would be in a much worse situation than if you had these things in abundance. A better question is whether sleeping another half hour or dedicating your own career to sociology is going to make a huge difference to everything. An even better question is whether it’s going to make an even bigger difference to everything than anything else you could do with that half hour or career. This is pretty well known, and applied in many circumstances, but for some reason it doesn’t stop people arguing from the interconnectedness of everything to the maximal importance of whatever they are doing.

Perhaps it is psychologically useful to have an all-purpose excuse that lets anyone doing anything that contributes at all to our hugely interconnected society feel like they are doing the most important thing ever. But if you really want to do something unusually useful, you’ll need a stronger criterion than ‘it affects everything’.

Laughing strategy

People who believe that a certain group of other people deserve higher relative status often refuse to laugh at jokes about that group of people. Unfortunately (for them) this tends to make them look like uptight goody-goodies who don’t have a sense of humor; a group whom almost everyone agrees should have low status. Why not instead focus on making up more jokes about the group whose relative status seems too high? It seems like that should have the opposite effect on the campaigners’ likability, and so also encourage more people to join that side of the fight. What am I missing?

Stop blaming efficiency

Andrew Sullivan, quoting and commenting on Adam Frank:

We’re more efficient than we’ve ever been, but extreme efficiency has drawbacks:

More efficient forestation means running through forests faster. More efficient fishing methods means running through natural fishing stocks faster. … The truth is that we have limits. True connections between family, friends and colleagues can not be compressed down to tightly scheduled “quality time.” The relentless logic of efficiency can unintentionally strip the most valued qualities of human life just as easily as it strips forests.

Under a common meaning, ‘efficiency’ is just getting more of what you want for a given cost. Since people want different things, what is efficient for you may be very inefficient for someone else. If you don’t want deforestation, then my efficient tree harvesting method is not an efficient way to pursue your goals. Often people seem to forget this, and treat the fact that other people are efficiently pursuing goals they don’t like as a problem with the concept of efficiency. This can then prompt them to reject the goal of efficiency in their own endeavours. That is a very bad idea if they are hoping to get what they want without wasting other things they want in the process, which is very likely what they are hoping for.

For instance, if ‘the most valued qualities of human life’ are stripped by spending most of your time, say, efficiently pursuing career productivity, the problem is not that efficiency is bad; the problem is that you are efficiently pursuing the wrong goals, that is, goals that are not your own, or at least not all of what you value. Being inefficient about, say, work is a terrible strategy for improving your home life, since only a minuscule proportion of the ways to be inefficient at work involve any home life improvement, and most of those are not efficient ones. Fortunately people using this strategy probably know intuitively that they will have to aim at the set of ways of being inefficient at work that do help their family lives. But once you have got as far as pursuing the values you actually care about, being efficient about them has really got to help, no matter how much your enemies also like efficiency. Similarly, don’t abandon ‘succeeding’ just because bad people also like it.


Added: Another example.

In defence of ignorant thinking

Suppose you want to contribute to the understanding of some subject, but you are presently ignorant about it. Should you do something closer to (a) read everything that’s been written so far, then join in, or (b) think about it yourself a lot before you even look at the basics of what others have come up with?

My guess is closer to (b), though I’m not confident. I’ll tell you why, then you can tell me why I’m wrong if you care to.

Any given topic has many ways to frame it: different assumptions to make, axioms to emphasise, evidence to notice, questions to ask of it, and aspects to cut out or leave in or smooth out in the abstraction process. Some varieties of each of these things are much more useful than others for making progress, and even the useful ones may help with progress in different directions. When different people approach the same topic, they will do it with a different set of all of these things, because they have different intuitions about it and are familiar with different approaches and other topics. I don’t know of a better, more formal way to try out such things. Once you have understood something complex in terms of one set of abstractions and so on, it becomes harder to see it in other ways, I think, particularly if you have to make up those other ways yourself. So if you start by reading what everyone else has said, you miss out on an opportunity to make a new way to think about it.

Most ways to think about a problem are probably unsuccessful in creating anything new of value. So you might think it’s a tragedy of the commons – it’s better for progress on a subject if each person joining it spends a bit of time at the start trying their own approach before they are familiar with the old work, but it is better for each individual if they just get on with the old work since their own approach probably won’t be any good. But if you do come up with a successful approach, I assume you are duly recompensed with status and glee and that sort of thing.

If eventually we have a perfect general understanding of how to best conceptualise topics, and how to ask the most productive questions and make the best assumptions and so on, then (a). Until then, I’m in favour of a bit of ignorant thinking. What do you think? (assuming your answer is b, or you are an expert on this topic).

One-on-one charity

People care less about large groups of people than about individuals, per capita and often in total. People also care more when they are one of very few people who could act, rather than part of a large group. In many large scale problems, both of these effects combine. For instance, climate change is being caused by a vast number of people and will affect a vast number of people. Many poor people could do with help from any of many rich people, and each rich person sees themselves as one of a huge number who could help that mass, ‘the poor’.

One strategy a charity could use when both of these problems are present at once is to pair its potential donors and donees one-to-one. They could for instance promise the family of 109 Seventeenth St. that a particular destitute girl is their own personal poor person, and they will not be bothered again (by that organisation) about any other poor people, and that this person will not receive help from anyone else (via that organisation). This would remove both of the aforementioned problems.

If they did this, I think potential donors would feel more concerned about their poor person than they previously felt about the whole bunch of them. I also think they would feel emotionally blackmailed and angry. I expect the latter effects would dominate their reactions. If you agree with my expectations, an interesting question is why it would be considered unfriendly behaviour on the part of the charity. If you don’t, an interesting question is why charities don’t do something like this.

Taking chances with dinner

Splitting up restaurant bills is annoying.

Good friends often avoid this cost by one of them paying for both one time and the other doing it next time, or better yet, by not keeping track of whose turn it is and it evening out in the long term.


It’s harder to do this with lesser friends and non-friends with whom one doesn’t anticipate many meals, because one expects to be exploited by a continual stream of free-riders who never offer to pay, or to have to always pay to show everyone that you are not one of those free-riders, or some other annoying equilibrium.

There is an easy way around this. Flip a coin. Whoever loses pays the whole bill.

Why don’t people do this?

Here are some possible reasons, partly inspired by conversations with friends:

They don’t think of it

Coins have been around a long time.

It’s hard to have a coin that both people agree is random

One person flips and the other calls it?

They are risk averse

Meals are a relatively small cost that people pay extremely often. They should expect a pretty fair distribution in the long run. If the concern is having to pay for fifty people at once when your income is not huge, either restrict the practice to smaller groups or keep the option of opting out open.

Using a randomising method such as a coin displays distrust, which is rude, but not using one would be costly because you don’t actually trust people

A coin could also display your own intention to be fair. And it doesn’t seem like such a big signal of distrust – I would not be offended if someone offered this deal.

Buying meals for others is a friendly and meaningful gesture – being forced to do it upon losing a bet sullies that ideal somehow

Maybe – I don’t know how this would work

Asking makes you look weird

This is an all-purpose reason for not doing anything differently. But sometimes people do change social norms – what was special about those times?

Sharing in the bill feels like contributing to something alongside others, which is a better feeling than paying all of it against your will, or than not contributing at all.

Maybe – I feel pretty indifferent about the whole emotional experience personally.

There are many inconvenient small payments that seem like they could be improved by paying a larger amount occasionally with some small probability. Yet I haven’t seen such a method put to use anywhere.
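The long-run fairness claim is easy to sanity-check. A minimal simulation, with hypothetical numbers (two diners, a fixed $40 bill, a fair coin), compares the coin-flip rule with ordinary splitting:

```python
import random

random.seed(0)

def coin_flip_total(meals, bill=40.0):
    """Total one diner pays over `meals` meals when a fair coin
    decides who pays the whole bill each time."""
    return sum(bill for _ in range(meals) if random.random() < 0.5)

# Under ordinary splitting, each of two diners pays bill / 2 per meal:
split_total = 100 * 40.0 / 2  # 2000.0

# Averaged over many repetitions, the coin-flip rule costs the same:
average = sum(coin_flip_total(100) for _ in range(10_000)) / 10_000
```

Any single diner’s total is noisier under the coin flip, but the spread grows only with the square root of the number of meals, so relative to total spending it shrinks over time, which is the sense in which the practice ‘evens out in the long term’.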

Signaling for a cause

Suppose you have come to agree with an outlandish seeming cause, and wish to promote it. Should you:

a) Join the cause with gusto, affiliating with its other members, wearing its T-shirts, working on its projects, speaking its lingo, taking up the culture and other causes of its followers

b) Be as ordinary as you can in every way, apart from speaking and acting in favour of the cause in a modest fashion

c) Don’t even mention that you support the cause. Engage its supporters in serious debate.

If you saw that a cause had another radical follower, another ordinary person with sympathies for it, or another skeptic who thought it worth engaging, which of these would make you more likely to look into their claims?

What do people usually do when they come to accept a radical cause?

Matching game

Have you read the overview of this blog? If so, I would be pleased if you would tell me which of the following styles of thought you think closest to that manifested in it:

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell if you are increasing net utility, or by how much. The critic concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One should always be better off putting whatever effort one is willing to contribute into what utilitarian accuracy it buys, rather than throwing it away on a strategy that is more random with respect to the goal.

A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X somehow is integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, no matter how bad your best estimate is of what X will be under different conditions, you can’t do better by ignoring X altogether. If you had a non-estimating-X strategy which you anticipated would do better than your best estimate at getting a good value of X, then you in fact believe yourself to have a better estimating-X strategy.
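A toy simulation (all numbers hypothetical) illustrates this: an agent choosing among candidate actions by a noisy estimate of X still gets X closer to three, on average, than an agent who ignores X entirely:

```python
import random

random.seed(1)

def one_round(noise_sd, n_actions=10):
    """Compare picking by noisy estimate against picking at random."""
    # True value of X each candidate action would produce (hidden from the agent).
    true_x = [random.uniform(0.0, 6.0) for _ in range(n_actions)]
    # The agent only sees noisy estimates of those values.
    est_x = [x + random.gauss(0.0, noise_sd) for x in true_x]
    # Estimating strategy: pick the action whose estimated X is closest to 3.
    by_estimate = min(range(n_actions), key=lambda i: abs(est_x[i] - 3.0))
    # Ignoring strategy: pick an action without reference to X at all.
    at_random = random.randrange(n_actions)
    return abs(true_x[by_estimate] - 3.0), abs(true_x[at_random] - 3.0)

def mean_errors(noise_sd, rounds=20_000):
    """Average distance of X from 3 under each strategy."""
    results = [one_round(noise_sd) for _ in range(rounds)]
    return (sum(r[0] for r in results) / rounds,
            sum(r[1] for r in results) / rounds)
```

Even with estimate noise comparable to the whole spread of X, the estimating strategy’s average distance from three is smaller; as noise grows it degrades toward the random strategy, but it does not do worse in expectation.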

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. Arguably similar to some lines of ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position. Perhaps intuitions can better implicitly assess probabilities via some other activity than explicitly thinking about them. However I have not heard this claim accompanied by any such motivating evidence. Also, if this were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether.

Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, somehow some representation of what people want has to get into whatever system of government is used, for the result to not be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly in an unknown but messy fashion while they do other things is probably not more accurate than asking people what they want. It seems however that people think of the former as a successful way around the measurement problem, not a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?