Selective social policing

Suppose you really want to punish people who were born on a Tuesday, because you have an irrational loathing of them. Happily, you rule the country. But sadly, you only rule it because the voting populace thinks you are kind and just and not at all vindictive and arbitrary. What do you do? One well-known solution is to ban stealing IP or jaywalking and then only have the police resources to deal with, er, about a seventh of cases.

I don’t know how often selective policing happens with actual police forces, but my impression is that it is not the main thing going on there. However I sometimes wonder if it is often the main thing going on in amateur social policing. By amateur social policing, I mean for instance a group deciding that Bob is too much of a leech, and shouldn’t be invited to things. Or smaller status fines distributed via private judgmental gossip, such as ‘Eh, I don’t really like Mary. She is always trying to draw attention to herself.’ or ‘I can’t believe I dated him. He makes every conflict into an opportunity to talk at length about his own stupid insecurities, and it’s so boring.’

I claim that in many cases if each person enjoyed having Bob around, his apparently being a leech wouldn’t seem like an urgent priority to avoid. And if the speaker got on with Mary, her sometimes attention-seeking behavior wouldn’t be a deal breaker. He might instead feel a little unhappy about the situation on her behalf, and wonder in a friendly way how to stop her embarrassing herself again. And if a woman said she had found a fantastic partner who was a serious candidate as soulmate, and her friend said that she actually knows him, and must warn her: if they ever get in a conflict, he will talk too much about his insecurities(!), this would seem like a laughably meek warning. Yet it seemed like a fine complaint at the end of the relationship.

I suspect often in such cases, real invisible reasons behind the scenes drive demand for criticism, and then any of the numerous blatantly obvious flaws that a given person has are brought up to satisfy the need.

I sort of believed something like this abstractly—that there are often different standards for different people, for reasons that most people would find unfair. Which at least means policing depends on the apparent crime and also some less legitimate factor. But lately I have wondered if it is almost entirely the latter.

If I am judgmental of someone, there is often a plausible story where I don’t like them for some reason I don’t endorse, and then choose one of their zillion flaws to complain about, because they are at least actually at fault for those. Whereas for people I do like, I’m ok with all zillion flaws, which are after all pretty inconsequential next to the joy I get from whatever nebulous factors actually make me like them. For instance, if I find myself thinking that a person’s personal habits are awful and their intellectual contributions are that magical combination of obvious and completely wrong, I think there is a good chance that something else is up—perhaps they were disrespectful to me once, or someone I was romantically interested in liked them and not me, or they have never smiled at me, or something actually important like that.

It’s not that the flaws aren’t flaws, or that I don’t genuinely disapprove of the flaws. It’s just that my interest in noting them varies a lot based on other factors. Similarly, it’s not that jaywalking isn’t a problem or doesn’t seem like one to those in charge—it’s just that interest in doing something about it is strongly determined by something else.

In law this kind of thing seems problematic. I’m not sure what to think about it in social contexts. Distribution of smiles and party invitations should arguably be allowed to depend on all kinds of unimportant factors that the distributor cares about. Such as prettiness of nose, or probable unattractiveness to crush also present at party. So the first-order effect of the indirection in social cases is to let the social-benefit distributor avoid discussing their silly-but-legitimate reasons, while in the legal case it is often to actually allow punishments to depend on silly-and-illegitimate reasons.

One reason I suspect this is that I think people often talk as if they would have thought someone was great, but then they learned that the person has a flaw. Truly shocking news! Yet I think that in the abstract most people would correctly answer a multiple-choice question about whether their friends have: a) zero flaws, b) a small number of flaws, or c) too many flaws to count. However I can’t actually remember cases of this, so maybe I made it up.

Effective diversity

1. Two diversity problems

Here are two concerns sometimes raised in Effective Altruist circles:

  1. Effective Altruists are not very diverse—they are disproportionately male, white, technically minded, technically employed, located in a small number of rich places, young, smart, educated, idealistic, inexperienced. Furthermore, because of this lack of diversity, the community as a whole will fail to know about many problems in the world. For instance, problems that are mostly salient if you have spent many years in the world outside of college, if you live in India, if you work in manufacturing, if you have normal human attitudes and norms.
  2. When new people join the Effective Altruism community and want to dedicate a lot of their efforts to effectively doing good, there is not a streamlined process to help them move from any of a wide range of previous activities to an especially effectively altruistic project, even if they really want to. And it gets harder the further away the person begins from the common EA backgrounds: a young San Franciscan math PhD working in tech and playing on the internet can move into research or advocacy or a new startup more easily than an experienced high school principal with a family in Germany can, plus it’s not even clear that the usual activities are a good use of that person’s skills. So less good is done, and furthermore less respect is accorded to such people than they arguably deserve, and so more forbearance is required on their part to stick around (possibly leading to problem 1).

2. Two divergent evaluations of diversity

These concerns are kind of opposite. Which suggests that if they ran into each other they might explode and disappear, at least a bit.

The first concern is based on a picture where people doing different things from the rest of the EA community are an extremely valuable asset, worthy of being a priority to pursue (with effort that could go to figuring out how to stop AI, or paying for malaria nets, or pursuing collaborators from easy-to-pursue demographics).

The second concern is based on a picture where people who are doing different things from the rest of the EA community are worthy mostly as labor that might be redirected to doing something similar to the rest of us. Which is to say that on this picture, the fact that they are doing different things from the rest of us is an active downside.

There are more nuanced pictures that have both issues at once. For instance, maybe it is important to get people with different backgrounds, but not important that they remain doing different things. Maybe because different backgrounds afford different useful knowledge, but this happens fairly quickly, so nearly everything you would learn from spending ten years in the military, you have learned after the first year.

I’m not sure if that is what anyone has in mind. I’m also not sure how often the same people hold both of the above concerns. But I do think it doesn’t matter. If some people were worried that EA didn’t have enough apples for some of its projects and some had the concern that it was too hard to turn apples into something useful like oranges, I feel like there should be some way for the former group to end up at least using the apples that the latter group is having trouble making good use of. And similarly for people with a wide range of backgrounds.

On this story, if you come to EA from a faraway walk of life, before trying to change what you are doing to be more similar to what other EAs are doing, you might do well to help with whatever concern (1) is asking for. (And if you aren’t moved by that concern yourself, you might still cooperate with those who do expect value there yet sadly find themselves to be yet more garden-variety utilitarian-leaning Soylent-eating twenty-something programmers, who can perhaps give some more of their earnings on your behalf.)

3. A practical suggestion

But what can one do as a (locally) unusual person to help with concern (1) effectively?

I don’t know about effectively, but here’s at least one cheap suggestion to begin with (competing proposals welcome in the comments):


Choose a part of the world that you are especially familiar with relative to other EAs, and tell the rest of us about the ways it might be interesting.
It can be a literal place, an industry, a community, a social scene, a type of endeavor, a kind of problem you have faced, etc.

Here are a bunch of prompts about what I think might be interesting:

  1. What major concerns do people in that place tend to have that EAs might not be familiar with?
    What would they say if you asked what was bad, what it was stupid hadn’t been solved, what was a gross injustice, or what they would do if they had a million dollars?
  2. What major inefficiencies and wrongs do you see in that place?
    What would you do differently there if you were in charge, or designing it from scratch? What is annoying or ridiculous?
  3. Pick a problem that seems bad and tractable. Roughly how bad do you think it is?
    Maybe do a back-of-the-envelope calculation, especially if you are new to all this and want EAing practice (a toy example follows this list).
  4. Are there maybe-good things to do that aren’t being done? How hard do you think they would be?
    If you can think of something to do about the problem(s) in Q3, perhaps estimate how efficiently the problem could be solved, on your rough account.
  5. What might be surprising about that part of the world, to those who haven’t spent time there?
  6. How does the official story relate to reality?
    The ‘official story’ might be what you could write in a children’s book or describe in a polite speech. Is it about right, or do things diverge from it? How is reality different? Are the things driving decisions things people can openly talk about?
  7. Are there important concepts, insights, ways of looking at the world, that are common in this place, that you think many EAs don’t know about?
    What is the most useful jargon? 
  8. What unused opportunities exist in the place?
    Who could really benefit from finding this place? What value is being wasted?
  9. What is just roughly going on in the place?
    What are people trying to do? What are the obstacles? Who are the people? What would someone immediately notice?
  10. What are the big differences between there and where most EAs are?
    What do you notice most moving between them?
  11. What do the people in that place do if they want to change things?
    I hear some people do things other than writing blog posts about them, but I’m not super confident; skip this one if it is confusing.
  12. If the EA movement had been founded in that place, what would it be doing differently?

(If you write something like this, and aren’t sure where to put it or don’t like to publicly post things, ask me.)
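
To make prompt 3 concrete, here is the sort of back-of-the-envelope calculation I mean, as a few lines of Python. The problem and every number in it are invented for illustration; the point is only the shape of the arithmetic.

```python
# A toy back-of-the-envelope estimate for prompt 3. All numbers are
# made up; substitute your own guesses about your part of the world.
people_affected_per_year = 2_000_000  # guess: cases of the problem per year
harm_per_case_dalys = 0.01            # guess: average health loss per case
fixable_fraction = 0.25               # guess: share a cheap fix could reach

dalys_at_stake = (people_affected_per_year
                  * harm_per_case_dalys
                  * fixable_fraction)
print(f"~{dalys_at_stake:,.0f} DALYs per year at stake")  # ~5,000
```

Even a calculation this crude can tell you whether a problem is more like hundreds or more like millions of units of harm per year, which is often most of what matters for comparing it to other causes.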


The only evidence I have that this would be good is my own intuition and the considerations mentioned above. I expect it to be quick though, and I for one would be interested to read the answers. I hope to post something like this myself later.

Appendix: Are diverse backgrounds actually pragmatically useful in this particular way? (Some miscellaneous thoughts, no great answers)

Diversity is good for many reasons. One might wonder if it is popular for the same range of reasons as usual within EA, and is just often justified on pragmatic EA grounds, because those kinds of grounds are more memetically fertile here.

One way to investigate this is to ask whether diversity has so far brought in good ideas for new things to do. Another is whether the things we currently do seem to be constrained to be close to home. I actually don’t see either of these being big (though could very easily be missing things), but I still think there is probably value to be had in this vicinity.

4.a. Does EA disproportionately think about EA-demographics relevant causes?

I don’t have a lot to say on the first question, but I’ll address the second a bit. The causes EAs think the most about seem to be trivially preventable diseases affecting poor people on the other side of the world, the far future and weird alien minds that might turn it into an unimaginable hellscape, and what it is like to be various different species.

These things seem ‘close to home’ for those people who find themselves mentally inclined to think about things very far from home, and to apply consistent reasoning to them. I feel like calling this a bias is like saying ‘you just found this apparently great investment opportunity because you are the kind of person who is willing to consider lots of different investment opportunities’. This seems like just a mark in our favor.

So while we may still be missing things that would seem even bigger but we just do not know about for demographic reasons, I think it’s not as bad as it might first seem.

My own guess is that EAs miss a lot of small opportunities for efficient value that are only available to people who intimately know a variety of areas, but actually getting to know those areas would be too expensive for the resulting opportunities to remain cheap.

4.b. Is EA influenced in other ways by arbitrary background things?

On the other hand, I think the ways we try to improve the world probably are influenced a lot by our backgrounds. We treat charity as the default way to improve matters, for instance. In some sense, giving money to the person best situated to do the thing you want is pretty natural. But in practice charities are often a certain kind of institution, and giving money to people to do things has serious inefficiencies, especially when the whole point is that you have unusual values that you want to fulfill, or you think that other people are failing to be efficient in a whole area of life where you want things to happen. And if charity seems as natural to you as it does to me, it is probably because you are familiar with econ 101 or something; if you had grown up in a very politically-minded climate, the most natural thing to do would be to try to influence politics.

I also think our beliefs and assumptions are probably influenced by our backgrounds. For many such influences, I think they are straightforwardly better. For instance, being well-educated, having thought about ethics unusually much, being taught that it is epistemically reasonable to think about stuff on one’s own, and having read a lot of LessWrong just seem good. There are other things that seem more random. For instance, until I came to the Bay everyone around me seemed to think we were headed for environmental catastrophe (unless technology saved us, or destroyed us first), and now everyone around me seems to think technology is going to save us or destroy us (unless environmental catastrophe destroys us first), and while these views are nominally the same, one group is spending all of their time trying to spread the word about the impending environmental catastrophe, while the other adds ‘assuming no catastrophes’ to the end of their questions about superintelligences. And while I think there are probably good cases that can be made about which of these things to worry about, I am not sure that I have seen them made, and my guess is that many people in both groups are trusting those around them, and would have trusted those around them if they had stumbled across the other group and got on well with them socially.

4.c. To what extent could EA be fruitfully influenced by more things?

So far I have talked about behaviors that are more common among other groups of people: causes to forward, interventions to default to, beliefs and assumptions to hold. I could also have talked about customs and attitudes and institutions. This is all stuff we could copy from other people, if we were familiar enough with other people to know what they do and what is worth copying. But there is also value in knowing what is up with other people without copying them. Like, how things fail and how organizations lose direction and what sources of information can be trusted in what ways and which things tend to end up nobody’s job, and which patterns appear across all endeavors for all time. And arguably an inside perspective is more informative than an outside one. For instance, various observations about medicine seem like good clues for one’s overall worldview, though perhaps most of the value there is from looking in detail rather than from the inside. (Whether having a correct worldview effectively contributes to doing good shall remain a question for another time).

Tragedy of the free time commons

Procrastination often seems a bit like an internal tragedy of the commons.

In a tragedy of the commons, a group of people share something nice, like a shared pasture on which to raise their cows. If they together refrained from overusing it they would all benefit (e.g. because it doesn’t become a mud pit), but if any one person alone refrains (e.g. by having a smaller herd of cows), they expect to see little of the benefit themselves, and they probably expect someone else to use more resources (e.g. by adding an additional cow to another herd), so that there isn’t even a shared benefit to the group from the person’s selfless action.

Suppose you have a paper due on Friday, and it is going to take 60 minutes to finish it. Think of yourself over the preceding day as 960 you-minutes. Each you-minute would much prefer the paper be done than not done the following morning, but would somewhat prefer to not work on it themselves. Because these time-slices make their decisions about whether to work one after another, and know what decisions were made in the past, the final N you-minutes in the day will definitely work, if there are N minutes of paper left to write. This means for you-minutes where there are fewer minutes of paper left to write than you-minutes left who might write them, working doesn’t help—it just relieves a later you-minute of working which would otherwise be forced to. And that is certainly worse for the you-minute deciding. So the 60 minutes of writing is done in the last 60 minutes. Which doesn’t destroy any value in this model, so all is good.
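
That simple model fits in a few lines of code. A minimal sketch, using the numbers above, where the decision rule is just ‘work only when every remaining minute is needed’:

```python
# A minimal sketch of the simple model: 960 you-minutes, 60 minutes of
# paper. Each minute works only when it is forced to, i.e. when every
# remaining minute is needed to finish on time.
TOTAL = 960      # waking minutes before the deadline
work_left = 60   # minutes of paper left to write

schedule = []
for t in range(TOTAL):
    minutes_remaining = TOTAL - t
    if work_left >= minutes_remaining and work_left > 0:
        work_left -= 1       # this minute is forced to work
        schedule.append(t)

print(f"work happens in minutes {schedule[0]}-{schedule[-1]} of {TOTAL}")
# -> work happens in minutes 900-959 of 960: all of it at the last moment
```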

But let’s make it more realistic. Suppose that there are better and worse times to work, and which are which is not known ahead of time. Working during a worse minute either produces less than a minute of work, or incurs other costs to the relevant you-minute (e.g. extra suffering). Then instead of everyone doing nothing until exactly sixty minutes before the deadline and then working full time after that, work becomes more worthwhile as you move toward the sixty minute mark, because it becomes increasingly likely that the bad minutes later will not be able to fulfill the work demanded of them. So relatively good you-minutes begin to work sometimes, and then less good ones, and then close to the deadline even the worst you-minutes work. The you-minutes still mostly work to avoid failing; they don’t mind much if they force bad you-minutes later to work, even if several of them have to work to get a minute’s worth of work done, or even if they endure private suffering as a result. So the early good minutes still don’t work much, and toward midnight many bad minutes work and suffer. This is more of a tragedy of the commons: most minutes free-ride, because if they didn’t, someone else would. And all the free-riding causes massive costs.
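
Here is a rough simulation of this second model. The threshold rule below is my own simple stand-in for the reasoning above, and the parameters are invented, but it produces the described pattern: nobody works early, good minutes volunteer as the deadline nears, and bad minutes get dragged in at the end.

```python
# A rough simulation of the second model. Each minute draws a random
# "quality" (its productivity if it works). A minute works if its quality
# beats a threshold that falls as the deadline nears, or if it is forced
# (every remaining minute is needed). All parameters are invented.
import random

random.seed(0)
TOTAL = 960          # waking minutes before the deadline
work_left = 60.0     # minutes' worth of paper left to write
AVG_Q = 0.5          # expected productivity of a random minute
SAFETY = 3.0         # how much expected slack lets a minute free-ride

workers = []
for t in range(TOTAL):
    if work_left <= 0:
        break
    quality = random.random()
    expected_capacity = (TOTAL - t) * AVG_Q  # what later minutes could do
    threshold = expected_capacity / (SAFETY * work_left)
    forced = work_left >= TOTAL - t          # every minute is now needed
    if quality > threshold or forced:
        work_left -= quality
        workers.append((t, quality))

bad_minutes = sum(1 for _, q in workers if q < 0.3)
print(f"finished: {work_left <= 0}; {len(workers)} minutes worked, "
      f"first at t={workers[0][0]}, {bad_minutes} of them low-quality")
```

The print line reports whether the paper got finished, when work started, and how many low-quality (suffering) minutes ended up working; raising SAFETY makes minutes more nervous and spreads the work earlier.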

I think this matches pretty well some elements of procrastination that I see.

This is more complicated than the most straightforward kind of tragedy of the commons—for instance, it involves sequential play—but I don’t know if there is a name for this exact kind of game.

Fear or fear?

I wrote recently about how people tend to use the same words—and sometimes concepts—for ‘want’ as in ‘yearn for’ and ‘want’ as in ‘intend’. As in, ‘It’s so lovely here I want to stay forever’, yet ‘I want to leave before midnight, because otherwise I will miss my train’. And the trouble this causes.

I think we do something similar with fear. ‘I’m concerned that X’ can mean ‘I feel fear about the possibility of X’ or it can mean ‘I think X (which would be bad) might be true’. I’m not sure which words naturally distinguish these two different messages, but whatever they are, I don’t seem to use them. For instance, what would avoid ambiguity in this sentence? ‘I ……………. that not enough people are going to vote’. I can think of several ways to fill the slot: worry, fear, am concerned, am scared, am frightened, am anxious. But I think they can either be used in both ways, or suggest a more specific kind of feeling.

The ambiguity of these words is especially noticeable if one has unusual levels of anxiety (for instance because of an anxiety disorder, or I suppose because of a relaxation disorder). If you try to express a different one from the one people expect, it becomes clear that interpreting such a statement relies on context. If you are known to usually be anxious, and you say ‘I fear our shoe rack is not large enough for all of our shoes’, you will be misunderstood to mean ‘my heart is pounding and I can’t breathe or think because of this shoe rack’ when you might be more accurately interpreted as, ‘I feel no emotions about the shoe rack, but may I draw your attention to a problem with it?’.

I don’t know if this causes problems, like the ‘want’ case. It is almost the opposite—you are mixing up ‘I have an urge to avoid this thing’ with ‘I judge there to be a problem here’. So you might expect it to go wrong in an analogous way: we feel fear regarding things, and then jump to expensively avoiding them without taking the other stakes into consideration.

It is certainly true that people behave in this way sometimes. For instance, once I was putting a golden necklace on in a dark car, and when I touched my hands to my neck I found a giant hairy spider on it. I jumped to expensively avoiding the spider in every sense, and did not find the necklace again. My mistake was neglecting to take into account the value of the necklace to me alongside my aversion to having a spider on my neck. I think there are more drawn out examples too. However I am not sure I have ever seen someone behave in this way due to confusion in the use of concepts, whereas with ‘want’ I think I have.

I wonder if more generally people often just use the same words for both ‘I have emotion Y about X’ and ‘my considered attitude toward X is the same as the one I might have if I had emotion Y’.  ‘I regret…’, ‘I’m sorry…’, ‘I hope’, and ‘I trust’, seem arguably like this too, but I’m not sure about others.

As I said before, I’m inclined to infer from the fact that people don’t really have good language to distinguish two things that they haven’t historically distinguished the things much. In this case for instance, I suppose that people have mostly treated feeling fear as identical to having the considered position that a thing is risky. This sort of thing would make sense in the design of a creature whose emotions basically track all of the relevant considerations. Or at least as many relevant considerations as any other part of its brain might usefully track. My guess is that we used to be much more like this for various reasons, and now are less so.

In the case of fear, perhaps we used to be in situations where our natural terror regarding aggressive animals for instance directed us well, whereas now we just tend to be too scared of snakes and sharks and not scared enough about heart disease or cars. At the same time, our intellectual faculties have grown into elaborate science and technology that can usefully track things like heart disease and build things like cars.

I have long thought that people often almost accidentally take their feelings to be their considered positions, without having an extra step of considering them. I take this as a bit more evidence, but it is also possible that my earlier theory just made this kind of observation stand out, and there are lots of observations in the world to observe.

Want like want want

“I want a donut”

“Ok, I’ll buy one for you”

“Oh, I don’t mean that on consideration I endorse purchasing one—I’m just expressing my urge to eat a donut.”

There are two meanings of ‘want’ in common usage. A feeling of desire, and an endorsed intention. ‘I want a baby tiger!’ is not analogous to ‘I want to work more on my taxes tonight’.

These ‘want’s are basically the input of a decision process and the output. I feel desire for a baby tiger and utter ‘I want it!’, and my brain considers that desire plus some other stuff about baby tigers and my life, and decides that on reflection I do not intend to acquire one. On the other hand, I feel no positive attraction toward my taxes at all, yet my aversion to prison and lawyers and generally being disagreeable in any way, once fed through my decision process, leaves me ‘wanting’ to work on them.

It is hard to be confused about the baby tiger and tax cases, but other times I think this leads to genuine confusion. The donut case above was a genuine confusion, but one of no importance. I think it leads to more important genuine confusions when one talks to oneself, and lacks two distinct concepts.

Luckily I waited ages before writing this blog post, and so came across David Wong of Cracked talking about something similar, as the #1 Way you are sabotaging your own life (without knowing it), which I’ll just quote here in full (the section, not the whole post, though I don’t agree with all of it):

#1. Lying to Yourself About What You Actually Want



Off the top of your head, say something you’ve always wanted to do. Then, follow it up with why you’ve never done it.

So, maybe you said something like, “I’ve always wanted to start a little business selling cupcakes! But I wouldn’t even know how to get started!”

Aaaaand … 90 percent of you just lied.

I know you did, because if you actually wanted to do the thing, then the second part — the obstacle — wouldn’t exist. For example, if that person up there actually wanted to start their cupcake business, they wouldn’t be confused about how to get started. They’d be a freaking walking encyclopedia of information about how to get started, because they’d have spent every single day reading up on it and calling other cupcake-shop owners for advice. They don’t do that because they don’t actually want it. They don’t have the invisible gun to their head.


[Image caption: “The cupcake is a lie.”]

This, right here, is at the heart of every unfulfilled ambition in your life. We use the same word — “want” — to mean two completely different things, and the constant confusion between those definitions is why so many people are disappointed in how their lives turned out. Depending on the context, “want” can be:

A) A statement of intended action (“I want to mow the lawn before it rains.”)

B) A statement of general preference (“I want everyone to live a long and happy life.”)

It sounds simple enough, but the confusion of those two uses of the word is everything. We switch between the two definitions sometimes in the same sentence. This morning, I was driving to Five Guys to get a burger and an entire grocery bag full of french fries to go with it (that is, the “small”). I passed a guy who was jogging, shirtless, who had a torso like Matthew McConaughey. I said to myself, “I want a body like that!” And, if I’d pulled over and asked the guy why he runs and works out, he’d have said the same thing, almost word-for-word — “It’s because I want a body like this!”

Same phrasing, meaning two completely different things. I used “want” in the same way I say I want world peace — a wistful statement about something I actually have no control over. If it’s the same effort either way, sure, I’ll take the rock-hard abs — give me an ab pill and I’ll swallow it. Otherwise, no, it ain’t happening. That jogging guy, on the other hand, used “want” as a statement of intended action — he “wants” to run five miles every day because he “wants” to be fit.


[Image caption: “Also because there’s a guy with a gun pointed at me. Please, call the police.”]

Now look around you — look at all of the minimum-wage people who “want” to be rich and/or famous, with some vague notion of, I don’t know, being on a reality show some day or getting “discovered” for some talent they didn’t know they had. Now look at all of the MBAs working 100-hour weeks on the trading floor because they “want” to be rich. The difference in the two is night and day, but in many cases the former group doesn’t realize it. They just stay poor while the other group starts shopping for vacation homes.

And I’m starting to think that the world really is divided between those who have a clear idea of what it means to want something — including the total cost and sacrifices it will take to get it — and those who are just content to leave it as an airy “wouldn’t it be nice” fantasy. The former group hones in on what they want and goes zooming after it like a shark. The latter looks at them, shakes their head and says, “How do they do it?” As if they have a cheat code, or a secret technique.


[Image caption: “That son of a bitch and his Konami code.”]

“What, you’re saying we should all be douchebag stockbrokers working hundred-hour weeks?” No. I’m saying that while some of you are sitting around the coffee shop talking about how you “want” the system to change, that douchebag is accumulating money so he can actually run for congress. Because when he “wants” something, he doesn’t sing a song about it. He prices that shit and makes a down payment. And when that relentless BMW-driving douche has kids, he’ll teach them, too, what it really means to “want” something — to be single-minded, and voracious, and to pursue it to the ends of the Earth. Instilling that lesson goes just as far toward preserving wealth and power in a group as the actual inheritance they’ll leave behind.

Are you scared of those people? Are you imagining them as cold-blooded stock brokers and lobbyists and swindlers, the Wolf of Wall Street types who are eating away at the world like a cancer? Well, they scare you because it’s a glimpse at what accomplishing great things actually costs. You know Steve Jobs was a fucking psychopath, right? So the next time somebody asks you if you want to be rich, really stop and think about it. Think about what it will take. Think about what kind of person you’ll need to become.


[Image caption: “I would literally make them from the blood of orphans if it could save me five cents on the per-unit cost.”]

And that’s the point of all this — I’ve found, as time goes on, that everybody gets what they want. Not what they say they want in order to make themselves look good to others, or what they tell themselves they want so they feel better about the current state of their life. No, I’m talking about what they really want. And to find out what they really want, you don’t need to ask them. You just need to look at what they did today. You want to change, start there.

He has a more complicated thesis, and is saying things I don’t necessarily agree with, but his central point is that mistaking one kind of ‘want’ for another is something like the number one way you are messing up your life, which suggests that he considers it an important confusion.

He thinks it is related to the strange discrepancy between our imagined prospects in five years, and what we do right now. You say to yourself ‘I want to be a classical guitarist’—and it arises as an idle positive urge toward playing classical guitar, due to letting your mind wander while listening to classical guitar one time. Then you just kind of figure that you do in fact want to, but don’t know how to right now or there are some obstacles or something, and hopefully you’ll figure it out in the vague future and probably one day be a great classical guitarist.

I claim that part of what is going on here is that you are observing your urge to play classical guitar, and thinking of it as ‘I want to play classical guitar’. Then you fail to distinguish this input to a decision from an actual decision. So either you sign up for a classical guitar class, but feel kind of bad about it and like you have too many things going on in your life, or you say to yourself that you want to, but you don’t, and you figure there is something wrong with you.

So maybe you ask yourself, ‘do I really want to play classical guitar?’ and you look inside your heart and see that you do really feel warmly about playing classical guitar, and you don’t notice that that is the answer to a different question than the one that is relevant to whether you should play classical guitar. You are checking that the input really is ‘classical guitar is nice’, rather than that the output really is ‘learning to play classical guitar is what I want on net, given the costs, and that it is nice’.

In some sense, if you fail to distinguish the input to your decision from the output, and confusedly use the input where you would rationally use the output, you are in fact correct that they are the same. It’s just that you are missing out on making a decision where you could benefit from doing so. Like, if you mistakenly treat ‘potato chips taste delicious’ as logically identical to ‘I endorse eating potato chips’ because you call them both “I want to eat potato chips!”, then you are missing out on a great chance to take into account considerations other than the flavor of potato chips in your diet, some of which may be important to you.

I have seen myself making this error. I remember it happening when I’m mostly thinking about something else, but idly appreciating something in my surrounds. For instance, I’m likely to see a cool startup and think ‘mmm, yeah I should have a startup’ or see a nice blouse and think ‘ooh, I should get a blouse like that’. And if I was really thinking about the issue, I would remember that things have costs, but if I’m not then the part of my brain that says ‘ooh’ at stuff also registers them as tentative decisions. I go away assuming that I now intend some day to have a cool startup.

This seems to parallel other useful distinctions I have seen people talk about in recent history. ‘Impressions’ and ‘beliefs’, and ‘what words are being said in my head’ versus ‘what I believe and stand for’ for instance. These are similarly inputs and outputs of decision processes, and by naming them as such, we can remember to actually stick a decision process between them. Much like if we label ‘raw pasta’ and ‘cooked pasta’ different things, it is easier to notice that cooking is an important step, and we are less likely to end up with pasta that is weirdly ill suited to being eaten half the time. Instead of just citing our impressions as beliefs, and then arguing with other people, or being confused that our ‘beliefs’ aren’t updating when other people tell us theirs (and we can tell they aren’t, by checking our impressions), we can just have some impressions and then consider them when deciding what we believe. To me these distinctions sound so obvious in retrospect that it is weird to even hypothesize that a moment ago you might not have made them. But relatedly, I think pointing them out has been pretty useful.

I’m not sure why we don’t make clear conceptual distinctions in these cases. Perhaps making distinctions is just hard. I think maybe these things are not so obvious because we didn’t always do so much intelligent decision-making. Presumably in goodest oldest days, ‘this food tastes good’ was closer in meaning to ‘I have decided to eat this food’ and ‘I feel like this plan is going to fail’ was closer in meaning to ‘this plan is probably going to fail’, and it is only later that considerations like health at age fifty and outside view evidence became sufficiently worth manually adding to one’s decision calculus to bother having a decision calculus to add them to, beyond unreflective feelings and intuitions.

Wong’s post also discusses people’s failure to connect their plans with the costs of those plans. They plan to learn the classical guitar, but don’t think of themselves as ‘planning to learn the classical guitar instead of spending so much time with their friends’ or ‘planning to learn the classical guitar instead of spending one of their two daily outside-of-work intentional-action-slots on reading’ or whatever the bottleneck may be. I don’t think he explicitly connects the two, though he clearly thinks they are part of a larger related structure. They are at least related in that people are wrong about what they want in part because they are not considering the costs. In his story, he wants to be fit, but only if he completely ignores the costs. Which is to say, if he just checks whether he likes the idea of being fit. He describes being confused, and going on to tell himself that he wanted to be fit in the ‘intending to do it’ sense, while mysteriously not being inclined to.

My model gives a straightforward reason for these errors to be related. Suppose you are making a cost-benefit analysis. If you confuse a single entry in the ‘benefits’ column with the output of the entire analysis, this reliably undercounts costs, to put it mildly. That is, if you ‘decide’ to learn Japanese by just using ‘liking the idea of knowing Japanese’ as a proxy for making a decision, then your decisions will be independent of what will be lost by learning Japanese. You will do the same thing whether it takes three hours or three million hours to learn Japanese. And you will find yourself with some very bad decisions that you can’t bring yourself to actually uphold.
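
The error is easy to state in code. A minimal sketch, with invented numbers for the Japanese example: one rule reads a single benefits entry and calls it a decision, the other actually weighs the columns.

```python
# The error in miniature: using one benefit entry as if it were the
# output of the whole analysis. All numbers are invented.
def urge_decision(benefits, costs):
    # Confuses an input with the output: costs are never consulted.
    return benefits["liking the idea"] > 0

def considered_decision(benefits, costs):
    # The actual cost-benefit analysis: everything gets weighed.
    return sum(benefits.values()) > sum(costs.values())

benefits = {"liking the idea": 5, "usefulness of Japanese": 2}
costs = {"thousands of hours of study": 9, "everything else forgone": 4}

print(urge_decision(benefits, costs))        # True, whether study takes
                                             # 3 hours or 3 million
print(considered_decision(benefits, costs))  # False at these costs
```

Change the costs to anything at all and urge_decision returns the same answer, which is exactly the sense in which the ‘decision’ is independent of what will be lost.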

I was told at a CFAR workshop that it is useful to say to yourself ‘if I write this paper it is going to be super annoying, but worth it’ rather than just ‘writing this paper is worth it’. This seems probably related. For instance, maybe you are just using ‘I want a tiger’ to decide to acquire one, and then if you say ‘I want a tiger, but it will be very messy, oh wait maybe I don’t’ or ‘I want a tiger, and it will be very messy, and still worth it’ then in the latter case you believe yourself more, or something.

Anyway, maybe more distinct terms would be helpful. For now I’m going to use ‘yearn for’ and ‘intend’, like ‘I yearn for a donut but I don’t intend to get one’. Better suggestions welcome.