Threat erosion

Often groups of people behave in consistent ways for a long time because they share a belief that things will be bad for any individual who deviates from the consistent way everyone is behaving.

For instance, there is a line in the sandwich shop. From a perspective so naive to our ubiquitous norms that it is hard to imagine, you might wonder why the person standing at the back does so, when the shopkeeper is much more likely to get sandwiches for people at the front. The reason of course is that if he were to position himself in the ample physical space between the person at the front and the shopkeeper, there would be some kind of uproar. Not only would the person at the front be angry, but everyone in the line would back them up, and the shopkeeper probably wouldn’t even grant a sandwich to the line-jumper.

So key to our ubiquitous tendency to stand peacefully in line is the fact that our common behavior is ‘stand in line and get angry with anyone who jumps it’, not just ‘stand in line’, which would be immediately exploited until the norm upholders gave up or died of starvation.

An interesting thing about this extra clause is that it is about our hypothetical behavior in circumstances that rarely happen. If our norms work well enough, we might go for years all peacefully standing in line, without anyone ever trying to push in at the front, because why would they?

An upshot is that if serious norm violations are rare, people might become pragmatically ill-equipped to respond to them. They might forget how to, or they might stop having the right resources to do so, physical or institutional. Or if generations are passing with no violations, the new generation might just fail to ever learn that they are meant to respond to violations, or learn what that would look like, since they never observe it. And maybe nobody notices any of this until norms are being violated and they find they have no response.

For instance, suppose that occasionally people sort of wander toward the front of the line in ambiguous circumstances, hoping to evade punishment by feigning innocent confusion. And those in the line always loudly point out the ‘error’ and the room scowls and the person is virtually always scared into getting in line. But one day someone just blatantly walks up to the front of the line. People point out the ‘error’ but the person says it is not an error: they are skipping the line.

The people in the line have never seen this. They only have experience quietly mentioning that they observe a possible norm violation, because that has always been plenty threatening. Everyone has become so used to believing that there is terrifying weaponry ready to be pulled out if there really were a real norm violation, that nobody has any experience pulling it out.

And perhaps it has been so long since anyone did pull it out that the specific weapons they stashed away for this wouldn’t even work any more. Maybe the threat used to be that everyone watching would gossip to others in the town about how bad you were. But now in a modern sandwich shop in a large city, that isn’t even a threat.

The world is full of sufficiently different people that in the real world, maybe someone would just punch you in the face. But it seems easy to imagine a case where nobody does anything. Where they haven’t been in this situation for so long, they can’t remember whether there is another clause in their shared behavior pattern that says if you punch someone because they got in line in front of you at the sandwich shop that you should be punished too.

Does this sort of erosion of unexercised threats actually happen? I am not sure. I think of it when, for instance, a politician behaves badly in ways that nobody had even thought of because they are so obviously not what you do, and then gets away with it while onlookers are like ‘wait, what?! You can’t do that!’ But I don’t know enough details of such cases to judge whether they are cases of threat erosion.

Another case where I guess people might experience this is in bringing up children, because threats of punishment are often made, and the details of the relationship are changing as the children change, so if you don’t exercise the threats they can cease to be relevant without you noticing.

I probably saw something close to this in looking after my brothers. My brothers were into fighting for fairness and justice, where ‘fairness’ is about one’s right to have stuff someone else has, and ‘justice’ is about meeting all slights with tireless bloodthirsty revenge. So my main task was dealing with fights, and threats were relevant. When my brothers were small, I was physically in control of what they could have or eat or punch, so could uphold threats. Later they were really big and I couldn’t have enforced any threats if they had decided to make it a physical conflict. This hadn’t been strongly tested by the time it dawned on me, and I continued to bluff. But there was perhaps an earlier period when I didn’t realize I was now bluffing, where if they had realized first, I would have been left with no response. This isn’t quite a case, because noticing that my threats were decaying didn’t let me strengthen them. But had I been an adult in charge of money and the car and such, things might have been different.

I’m not sure if such a situation would last for long in important cases where a replacement threat is feasible. If violations are still clearly bad for a set of people who have general resources to invest in threats, I’d often expect a new and compelling response to be invented in time.

Being useless to show effectiveness

From an email discussion (lightly edited):

I actually think an important related dynamic, in the world at large more than EA, is people favoring actions that are verifiably useless to themselves over ones that are probably useful to others but also maybe useful to themselves. I blogged about this here a while ago. In short, I see this as a signaling problem. The undesirable action (destroying resources in an evidently useless way) is intended to signal that you are not bad. Bad people (greedy exploiters trying to steal everyone else’s stuff) can make themselves look just like effective good people (both do things that look high leverage and where it is not totally clear what the levers are ultimately pushing). So the bad people do that, because it beats looking bad. Then there is no signal that the effective good people can send to distinguish themselves from the bad people. So people who want to not look bad have to look ineffective instead.

A way something like this might happen in our vicinity is, for example, if I genuinely guess that the most effective thing to do might be for me to buy a delicious drink and then sit still in a comfy place for the day and think about human coordination in the abstract. However, this is much like what a selfish version of me might do. So if I want to not humiliate myself by seeming like a cheating free-rider liar motivated reasoner in front of the other EAs, or perhaps if I just experience too much doubt about my own motives, or even if I just want to make it straightforward for others around me to know they can trust me, perhaps I should instead work for a reputable EA org or earn money in an annoying way and give it to someone far away from me.

On this model, the situation would be improved by a way to demonstrate that one is effective-good rather than effective-evil. (As in, a second sense in which it is a signaling problem is that adding a good way to signal would make it better).

Philosophical love poetry

Because Daily Nous asked for it.


Roses are red
And their redness canonic
I’m longing to take you
To realms less platonic



Violets are bled
Roses are rue
Induction’s mysterious
But long live ye and mou


Roses are red,
And I’d read rose reviews
But I viewed your red posy
And knew something new


Roses are red
Your cat is now too
Signaling’s hard
And so are you?


Roses are red,
Romance is too,
But I’ll never know
What red’s like for you


Selective social policing

Suppose you really want to punish people who were born on a Tuesday, because you have an irrational loathing of them. Happily you rule the country. But sadly, you only rule it because the voting populace thinks you are kind and just and not at all vindictive and arbitrary. What do you do? One well known solution is to ban stealing IP or jaywalking and then only have the police resources to deal with, er, about a seventh of cases.

I don’t know how often selective policing happens with actual police forces, but my impression is that it is not the main thing going on there. However I sometimes wonder if it is often the main thing going on in amateur social policing. By amateur social policing, I mean for instance a group deciding that Bob is too much of a leech, and shouldn’t be invited to things. Or smaller status fines distributed via private judgmental gossip, such as ‘Eh, I don’t really like Mary. She is always trying to draw attention to herself.’ or ‘I can’t believe I dated him. He makes every conflict into an opportunity to talk at length about his own stupid insecurities, and it’s so boring.’

I claim that in many cases if each person enjoyed having Bob around, his apparently being a leech wouldn’t seem like an urgent priority to avoid. And if the speaker got on with Mary, her sometimes attention-seeking behavior wouldn’t be a deal breaker. He might instead feel a little unhappy about the situation on her behalf, and wonder in a friendly way how to stop her embarrassing herself again. And if a woman said she had found a fantastic partner who was a serious candidate as soulmate, and her friend said that she actually knows him, and must warn her: if they ever get in a conflict, he will talk too much about his insecurities(!), this would seem like a laughably meek warning. Yet it feels like a fine complaint in the gossip above.

I suspect often in such cases, real invisible reasons behind the scenes drive demand for criticism, and then any of the numerous blatantly obvious flaws that a given person has are brought up to satisfy the need.

I sort of believed something like this abstractly—that there are often different standards for different people, for reasons that most people would find unfair. Which at least means policing depends on the apparent crime and also some less legitimate factor. But lately I have wondered if it is almost entirely the latter.

If I am judgmental of someone, there is often a plausible story where I don’t like them for some reason I don’t endorse, and then choose one of their zillion flaws to complain about, because they are at least actually at fault for those. Whereas for people I do like, I’m ok with all zillion flaws, which are after all pretty inconsequential next to the joy I get from whatever nebulous factors actually make me like them. For instance, if I find myself thinking that a person’s personal habits are awful and their intellectual contributions are that magical combination of obvious and completely wrong, I think there is a good chance that something else is up—perhaps they were disrespectful to me once, or someone I was romantically interested in liked them and not me, or they have never smiled at me, or something actually important like that.

It’s not that the flaws aren’t flaws, or that I don’t genuinely disapprove of the flaws. It’s just that my interest in noting them varies a lot based on other factors. Similarly, it’s not that jaywalking isn’t a problem or doesn’t seem like one to those in charge—it’s just that interest in doing something about it is strongly determined by something else.

In law this kind of thing seems problematic. I’m not sure what to think about it in social contexts. Distribution of smiles and party invitations should arguably be allowed to depend on all kinds of unimportant factors that the distributor cares about. Such as prettiness of nose, or probable unattractiveness to crush also present at party. So the first-order effect of the indirection in social cases is to let the social-benefit distributor avoid discussing their silly-but-legitimate reasons, while in the legal case it is often to actually allow punishments to depend on silly-and-illegitimate reasons.

One reason I suspect this is that I think people often talk as if they would have thought someone was great, but then they learned that the person has a flaw. Truly shocking news! Yet I think in the abstract most people would correctly answer a multiple choice question about whether their friends have: a) zero flaws, b) a small number of flaws, or c) too many flaws to count. However I can’t actually remember specific cases of this, so maybe I made it up.

Effective diversity

1. Two diversity problems

Here are two concerns sometimes raised in Effective Altruist circles:

  1. Effective Altruists are not very diverse—they are disproportionately male, white, technically minded, technically employed, located in a small number of rich places, young, smart, educated, idealistic, inexperienced. Furthermore, because of this lack of diversity, the community as a whole will fail to know about many problems in the world. For instance, problems that are mostly salient if you have spent many years in the world outside of college, if you live in India, if you work in manufacturing, if you have normal human attitudes and norms.
  2. When new people join the Effective Altruism community and want to dedicate a lot of their efforts to effectively doing good, there is not a streamlined process to help them move from any of a wide range of previous activities to an especially effectively altruistic project, even if they really want to. And it gets harder the further away the person begins from the common EA backgrounds: a young San Franciscan math PhD working in tech and playing on the internet can move into research or advocacy or a new startup more easily than an experienced high school principal with a family in Germany can, plus it’s not even clear that the usual activities are a good use of that person’s skills. So less good is done, and furthermore less respect is accorded to such people than they arguably deserve, and so more forbearance is required on their part to stick around (possibly leading to problem 1).

2. Two divergent evaluations of diversity

These concerns are kind of opposite. Which suggests that if they ran into each other they might explode and disappear, at least a bit.

The first concern is based on a picture where people doing different things from the rest of the EA community are an extremely valuable asset, worthy of being a priority to pursue (with effort that could go to figuring out how to stop AI, or paying for malaria nets, or pursuing collaborators from easy-to-pursue demographics).

The second concern is based on a picture where people who are doing different things from the rest of the EA community are worthy mostly as labor that might be redirected to doing something similar to the rest of us. Which is to say that on this picture, the fact that they are doing different things from the rest of us is an active downside.

There are more nuanced pictures that have both issues at once. For instance, maybe it is important to get people with different backgrounds, but not important that they remain doing different things. Maybe because different backgrounds afford different useful knowledge, but this happens fairly quickly, so nearly everything you would learn from spending ten years in the military, you have learned after the first year.

I’m not sure if that is what anyone has in mind. I’m also not sure how often the same people hold both of the above concerns. But I do think it doesn’t matter. If some people were worried that EA didn’t have enough apples for some of its projects and some had the concern that it was too hard to turn apples into something useful like oranges, I feel like there should be some way for the former group to end up at least using the apples that the latter group is having trouble making good use of. And similarly for people with a wide range of backgrounds.

On this story, if you come to EA from a far away walk of life, before trying to change what you are doing to be more similar to what other EAs are doing, you might do well to help with whatever concern (1) is asking for. (And if you aren’t moved by that concern yourself, you might still cooperate with those who do expect value there, yet sadly find themselves to be more garden-variety utilitarian-leaning Soylent-eating twenty-something programmers, who can perhaps give some more of their earnings on your behalf.)

3. A practical suggestion

But what can one do as a (locally) unusual person to help with concern (1) effectively?

I don’t know about effectively, but here’s at least one cheap suggestion to begin with (competing proposals welcome in the comments):

Choose a part of the world that you are especially familiar with relative to other EAs, and tell the rest of us about the ways it might be interesting.
It can be a literal place, an industry, a community, a social scene, a type of endeavor, a kind of problem you have faced, etc.

Here are a bunch of prompts about what I think might be interesting:

  1. What major concerns do people in that place tend to have that EAs might not be familiar with?
    What would they say if you asked what was bad, what it was stupid hadn’t been solved yet, what was a gross injustice, what they would do if they had a million dollars?
  2. What major inefficiencies and wrongs do you see in that place?
    What would you do differently there if you were in charge, or designing it from scratch? What is annoying or ridiculous?
  3. Pick a problem that seems bad and tractable. Roughly how bad do you think it is? 
    Maybe do a back of the envelope calculation, especially if you are new to all this and want EAing practice.
  4. Are there maybe-good things to do that aren’t being done? How hard do you think they would be?
    If you can think of something for the problem(s) in Q3, perhaps estimate how efficiently the problem could be solved, on your rough account.
  5. What might be surprising about that part of the world, to those who haven’t spent time there?
  6. How does the official story relate to reality?
    The ‘official story’ might be what you could write in a children’s book or describe in a polite speech. Is it about right, or do things diverge from it? How is reality different? Are the things driving decisions things people can openly talk about?
  7. Are there important concepts, insights, ways of looking at the world, that are common in this place, that you think many EAs don’t know about?
    What is the most useful jargon? 
  8. What unused opportunities exist in the place?
    Who would benefit from finding this place? What value is being wasted?
  9. What is just roughly going on in the place?
    What are people trying to do? What are the obstacles? Who are the people? What would someone immediately notice?
  10. What are the big differences between there and where most EAs are?
    What do you notice most moving between them?
  11. What do the people in that place do if they want to change things?
    I hear some people do things other than writing blog posts about them, but I’m not super confident; skip this one if it is confusing.
  12. If the EA movement had been founded in that place, what would it be doing differently?

(If you write something like this, and aren’t sure where to put it or don’t like to publicly post things, ask me.)

The only evidence I have that this would be good is my own intuition and the considerations mentioned above. I expect it to be quick though, and I for one would be interested to read the answers. I hope to post something like this myself later.
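As a concrete illustration of the kind of back-of-the-envelope calculation prompts 3 and 4 ask for, here is a minimal sketch in Python. Every number in it is an invented placeholder, not a real estimate; the point is only the shape of the exercise (scale of the problem, cost of a candidate fix, rough cost-effectiveness):

```python
# Back-of-the-envelope sketch for prompts 3 and 4.
# All numbers are hypothetical placeholders for illustration.

people_affected = 1_000_000       # hypothetical: people facing the problem
hours_lost_per_person_year = 50   # hypothetical: annual hours each loses to it
value_per_hour_usd = 15           # hypothetical: dollar value of an hour

# Prompt 3: roughly how bad is the problem?
annual_cost_usd = people_affected * hours_lost_per_person_year * value_per_hour_usd
print(f"Rough annual cost: ${annual_cost_usd:,.0f}")
# prints: Rough annual cost: $750,000,000

# Prompt 4: how efficiently might a candidate fix address it?
fix_cost_usd = 10_000_000         # hypothetical: cost of the candidate fix
fraction_solved = 0.10            # hypothetical: share of the problem it removes
value_recovered_usd = annual_cost_usd * fraction_solved
print(f"Value recovered per dollar spent: {value_recovered_usd / fix_cost_usd:.1f}")
# prints: Value recovered per dollar spent: 7.5
```

Any estimate of this shape is only as good as its inputs, but even made-up numbers force you to notice which quantities matter and which you actually know something about.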

Appendix: Are diverse backgrounds actually pragmatically useful in this particular way? (Some miscellaneous thoughts, no great answers)

Diversity is good for many reasons. One might wonder if it is popular within EA for the same range of reasons as usual, and is just often justified on pragmatic EA grounds, because those kinds of grounds are more memetically fertile here.

One way to investigate this is to ask whether diversity has so far brought in good ideas for new things to do. Another is whether the things we currently do seem to be constrained to be close to home. I actually don’t see either of these being big (though could very easily be missing things), but I still think there is probably value to be had in this vicinity.

4.a. Does EA disproportionately think about EA-demographics relevant causes?

I don’t have a lot to say on the first question, but I’ll address the second a bit. The causes EAs think the most about seem to be trivially preventable diseases affecting poor people on the other side of the world, the far future and weird alien minds that might turn it into an unimaginable hellscape, and what it is like to be various different species.

These things seem ‘close to home’ for those people who find themselves mentally inclined to think about things very far from home, and to apply consistent reasoning to them. I feel like calling this a bias is like saying ‘you just found this apparently great investment opportunity because you are the kind of person who is willing to consider lots of different investment opportunities’. This seems like just a mark in our favor.

So while we may still be missing things that would seem even bigger but we just do not know about for demographic reasons, I think it’s not as bad as it might first seem.

My own guess is that EAs miss a lot of small opportunities for efficient value that are only available to people who intimately know a variety of areas, but actually getting to know those areas would be too expensive for the opportunities resulting to remain cheap.

4.b. Is EA influenced in other ways by arbitrary background things?

On the other hand, I think the ways we try to improve the world probably are influenced a lot by our backgrounds. We treat charity as the default way to improve matters, for instance. In some sense, giving money to the person best situated to do the thing you want is pretty natural. But in practice charities are often a certain kind of institution, and giving money to people to do things has serious inefficiencies, especially when the whole point is that you have unusual values that you want to fulfill, or you think that other people are failing to be efficient in a whole area of life where you want things to happen. And if charity seems as natural to you as to me, it is probably because you are familiar with econ 101 or something, and if you had grown up in a very politically-minded climate the most natural thing to do would be to try to influence politics.

I also think our beliefs and assumptions are probably influenced by our backgrounds. For many such influences, I think they are straightforwardly better. For instance, being well-educated, having thought about ethics unusually much, being taught that it is epistemically reasonable to think about stuff on one’s own, and having read a lot of LessWrong just seem good. There are other things that seem more random. For instance, until I came to The Bay everyone around me seemed to think we were headed for environmental catastrophe (unless technology saved us, or destroyed us first), and now everyone around me seems to think technology is going to save us or destroy us (unless environmental catastrophe destroys us first), and while these views are nominally the same, one group is spending all of their time trying to spread the word about the impending environmental catastrophe, while the other adds ‘assuming no catastrophes’ to the end of their questions about superintelligences. And while I think there are probably good cases that can be made about which of these things to worry about, I am not sure that I have seen them made, and my guess is that many people in both groups are trusting those around them, and would have trusted those around them if they had stumbled across the other group and got on well with them socially.

4.c. To what extent could EA be fruitfully influenced by more things?

So far I have talked about behaviors that are more common among other groups of people: causes to forward, interventions to default to, beliefs and assumptions to hold. I could also have talked about customs and attitudes and institutions. This is all stuff we could copy from other people, if we were familiar enough with other people to know what they do and what is worth copying. But there is also value in knowing what is up with other people without copying them. Like, how things fail and how organizations lose direction and what sources of information can be trusted in what ways and which things tend to end up nobody’s job, and which patterns appear across all endeavors for all time. And arguably an inside perspective is more informative than an outside one. For instance, various observations about medicine seem like good clues for one’s overall worldview, though perhaps most of the value there is from looking in detail rather than from the inside. (Whether having a correct worldview effectively contributes to doing good shall remain a question for another time).