Philosophical love poetry

Because Daily Nous asked for it.


Roses are red
And their redness canonic
I’m longing to take you
To realms less platonic



Violets are bled
Roses are rue
Induction’s mysterious
But long live ye and mou


Roses are red,
And I’d read rose reviews
But I viewed your red posy
And knew something new


Roses are red
Your cat is now too
Signaling’s hard
And so are you?


Roses are red,
Romance is too,
But I’ll never know
What red’s like for you


Selective social policing

Suppose you really want to punish people who were born on a Tuesday, because you have an irrational loathing of them. Happily you rule the country. But sadly, you only rule it because the voting populace thinks you are kind and just and not at all vindictive and arbitrary. What do you do? One well-known solution is to ban stealing IP or jaywalking and then only have the police resources to deal with, er, about a seventh of cases.

I don’t know how often selective policing happens with actual police forces, but my impression is that it is not the main thing going on there. However I sometimes wonder if it is often the main thing going on in amateur social policing. By amateur social policing, I mean for instance a group deciding that Bob is too much of a leech, and shouldn’t be invited to things. Or smaller status fines distributed via private judgmental gossip, such as ‘Eh, I don’t really like Mary. She is always trying to draw attention to herself.’ or ‘I can’t believe I dated him. He makes every conflict into an opportunity to talk at length about his own stupid insecurities, and it’s so boring.’

I claim that in many cases if each person enjoyed having Bob around, his apparently being a leech wouldn’t seem like an urgent priority to avoid. And if the speaker got on with Mary, her sometimes attention-seeking behavior wouldn’t be a deal breaker. He might instead feel a little unhappy about the situation on her behalf, and wonder in a friendly way how to stop her embarrassing herself again. And if a woman said she had found a fantastic partner who was a serious candidate as soulmate, and her friend said that she actually knows him, and must warn her: if they ever get in a conflict, he will talk too much about his insecurities(!), this would seem like a laughably meek warning. Yet it passes as a fine complaint after the relationship ends.

I suspect often in such cases, real invisible reasons behind the scenes drive demand for criticism, and then any of the numerous blatantly obvious flaws that a given person has are brought up to satisfy the need.

I sort of believed something like this abstractly—that there are often different standards for different people, for reasons that most people would find unfair. Which at least means policing depends on the apparent crime and also some less legitimate factor. But lately I have wondered if it is almost entirely the latter.

If I am judgmental of someone, there is often a plausible story where I don’t like them for some reason I don’t endorse, and then choose one of their zillion flaws to complain about, because they are at least actually at fault for those. Whereas for people I do like, I’m ok with all zillion flaws, which are after all pretty inconsequential next to the joy I get from whatever nebulous factors actually make me like them. For instance, if I find myself thinking that a person’s personal habits are awful and their intellectual contributions are that magical combination of obvious and completely wrong, I think there is a good chance that something else is up—perhaps they were disrespectful to me once, or someone I was romantically interested in liked them and not me, or they have never smiled at me, or something actually important like that.

It’s not that the flaws aren’t flaws, or that I don’t genuinely disapprove of the flaws. It’s just that my interest in noting them varies a lot based on other factors. Similarly, it’s not that jaywalking isn’t a problem or doesn’t seem like one to those in charge—it’s just that interest in doing something about it is strongly determined by something else.

In law this kind of thing seems problematic. I’m not sure what to think about it in social contexts. Distribution of smiles and party invitations should arguably be allowed to depend on all kinds of unimportant factors that the distributor cares about. Such as prettiness of nose, or probable unattractiveness to crush also present at party. So the first-order effect of the indirection in social cases is to let the social-benefit distributor avoid discussing their silly-but-legitimate reasons, while in the legal case it is often to actually allow punishments to depend on silly-and-illegitimate reasons.

One reason I suspect this is that I think people often talk as if they would have thought someone was great, but then they learned that the person has a flaw. Truly shocking news! Yet I think that in the abstract most people would correctly answer a multiple choice question about whether their friends have: a) zero flaws, b) a small number of flaws, or c) too many flaws to count. However I can’t actually remember cases of this, so maybe I made it up.

Effective diversity

1. Two diversity problems

Here are two concerns sometimes raised in Effective Altruist circles:

  1. Effective Altruists are not very diverse—they are disproportionately male, white, technically minded, technically employed, located in a small number of rich places, young, smart, educated, idealistic, inexperienced. Furthermore, because of this lack of diversity, the community as a whole will fail to know about many problems in the world. For instance, problems that are mostly salient if you have spent many years in the world outside of college, if you live in India, if you work in manufacturing, if you have normal human attitudes and norms.
  2. When new people join the Effective Altruism community and want to dedicate a lot of their efforts to effectively doing good, there is not a streamlined process to help them move from any of a wide range of previous activities to an especially effectively altruistic project, even if they really want to. And it gets harder the further away the person begins from the common EA backgrounds: a young San Franciscan math PhD working in tech and playing on the internet can move into research or advocacy or a new startup more easily than an experienced high school principal with a family in Germany can, plus it’s not even clear that the usual activities are a good use of that person’s skills. So less good is done, and furthermore less respect is accorded to such people than they arguably deserve, and so more forbearance is required on their part to stick around (possibly leading to problem 1).

2. Two divergent evaluations of diversity

These concerns are kind of opposite. Which suggests that if they ran into each other they might explode and disappear, at least a bit.

The first concern is based on a picture where people doing different things from the rest of the EA community are an extremely valuable asset, worthy of being a priority to pursue (with effort that could go to figuring out how to stop AI, or paying for malaria nets, or pursuing collaborators from easy-to-pursue demographics).

The second concern is based on a picture where people who are doing different things from the rest of the EA community are worthy mostly as labor that might be redirected to doing something similar to the rest of us. Which is to say that on this picture, the fact that they are doing different things from the rest of us is an active downside.

There are more nuanced pictures that have both issues at once. For instance, maybe it is important to get people with different backgrounds, but not important that they remain doing different things. Maybe because different backgrounds afford different useful knowledge, but this happens fairly quickly, so nearly everything you would learn from spending ten years in the military, you have learned after the first.

I’m not sure if that is what anyone has in mind. I’m also not sure how often the same people hold both of the above concerns. But I do think it doesn’t matter. If some people were worried that EA didn’t have enough apples for some of its projects and some had the concern that it was too hard to turn apples into something useful like oranges, I feel like there should be some way for the former group to end up at least using the apples that the latter group is having trouble making good use of. And similarly for people with a wide range of backgrounds.

On this story, if you come to EA from a far away walk of life, before trying to change what you are doing to be more similar to what other EAs are doing, you might do well to help with whatever concern (1) is asking for. (And if you aren’t moved by that concern yourself, you might still cooperate with those who do expect value there yet sadly find themselves to be yet more garden-variety utilitarian-leaning Soylent-eating twenty-something programmers, who can perhaps give some more of their earnings on your behalf.)

3. A practical suggestion

But what can one do as a (locally) unusual person to help with concern (1) effectively?

I don’t know about effectively, but here’s at least one cheap suggestion to begin with (competing proposals welcome in the comments):

Choose a part of the world that you are especially familiar with relative to other EAs, and tell the rest of us about the ways it might be interesting.
It can be a literal place, an industry, a community, a social scene, a type of endeavor, a kind of problem you have faced, etc.

Here are a bunch of prompts about what I think might be interesting:

  1. What major concerns do people in that place tend to have that EAs might not be familiar with?
    What would they say if you asked what was bad, what was stupid that it hadn’t been solved, what was a gross injustice, what would they do if they had a million dollars?
  2. What major inefficiencies and wrongs do you see in that place?
    What would you do differently there if you were in charge, or designing it from scratch? What is annoying or ridiculous?
  3. Pick a problem that seems bad and tractable. Roughly how bad do you think it is? 
    Maybe do a back of the envelope calculation, especially if you are new to all this and want EAing practice.
  4. Are there maybe-good things to do that aren’t being done? How hard do you think they would be?
    If you can think of something for the problem(s) in Q3, perhaps estimate how efficiently the problem could be solved, on your rough account.
  5. What might be surprising about that part of the world, to those who haven’t spent time there?
  6. How does the official story relate to reality?
    The ‘official story’ might be what you could write in a children’s book or describe in a polite speech. Is it about right, or do things diverge from it? How is reality different? Are the things driving decisions things people can openly talk about?
  7. Are there important concepts, insights, ways of looking at the world, that are common in this place, that you think many EAs don’t know about?
    What is the most useful jargon? 
  8. What unused opportunities exist in the place?
    Who could really benefit from discovering this place? What value is being wasted?
  9. What is just roughly going on in the place?
    What are people trying to do? What are the obstacles? Who are the people? What would someone immediately notice?
  10. What are the big differences between there and where most EAs are?
    What do you notice most moving between them?
  11. What do the people in that place do if they want to change things?
    I hear some people do things other than writing blog posts about them, but I’m not super confident; skip this one if it is confusing.
  12. If the EA movement had been founded in that place, what would it be doing differently?

(If you write something like this, and aren’t sure where to put it or don’t like to publicly post things, ask me.)

The only evidence I have that this would be good is my own intuition and the considerations mentioned above. I expect it to be quick though, and I for one would be interested to read the answers. I hope to post something like this myself later.

Appendix: Are diverse backgrounds actually pragmatically useful in this particular way? (Some miscellaneous thoughts, no great answers)

Diversity is good for many reasons. One might wonder whether it is popular within EA for the same range of reasons as anywhere else, and just often justified on pragmatic EA grounds, because those kinds of grounds are more memetically fertile here.

One way to investigate this is to ask whether diversity has so far brought in good ideas for new things to do. Another is whether the things we currently do seem to be constrained to be close to home. I actually don’t see either of these being big (though could very easily be missing things), but I still think there is probably value to be had in this vicinity.

4.a. Does EA disproportionately think about EA-demographics relevant causes?

I don’t have a lot to say on the first question, but I’ll address the second a bit. The causes EAs think the most about seem to be trivially preventable diseases affecting poor people on the other side of the world, the far future and weird alien minds that might turn it into an unimaginable hellscape, and what it is like to be various different species.

These things seem ‘close to home’ for those people who find themselves mentally inclined to think about things very far from home, and to apply consistent reasoning to them. I feel like calling this a bias is like saying ‘you just found this apparently great investment opportunity because you are the kind of person who is willing to consider lots of different investment opportunities’. This seems like just a mark in our favor.

So while we may still be missing things that would seem even bigger but we just do not know about for demographic reasons, I think it’s not as bad as it might first seem.

My own guess is that EAs miss a lot of small opportunities for efficient value that are only available to people who intimately know a variety of areas, but actually getting to know those areas would be too expensive for the resulting opportunities to remain cheap.

4.b. Is EA influenced in other ways by arbitrary background things?

On the other hand, I think the ways we try to improve the world probably are influenced a lot by our backgrounds. We treat charity as the default way to improve matters, for instance. In some sense, giving money to the person best situated to do the thing you want is pretty natural. But in practice charities are often a particular kind of institution, and giving money to people to do things has serious inefficiencies, especially when the whole point is that you have unusual values that you want to fulfill, or you think that other people are failing to be efficient in a whole area of life where you want things to happen. And if charity seems as natural to you as it does to me, it is probably because you are familiar with econ 101 or something; if you had grown up in a very politically-minded climate, the most natural thing to do would be to try to influence politics.

I also think our beliefs and assumptions are probably influenced by our backgrounds. For many such influences, I think they are straightforwardly better. For instance, being well-educated, having thought about ethics unusually much, being taught that it is epistemically reasonable to think about stuff on one’s own, and having read a lot of LessWrong just seem good. There are other things that seem more random. For instance, until I came to The Bay everyone around me seemed to think we were headed for environmental catastrophe (unless technology saved us, or destroyed us first), and now everyone around me seems to think technology is going to save us or destroy us (unless environmental catastrophe destroys us first), and while these views are nominally the same, one group is spending all of their time trying to spread the word about the impending environmental catastrophe, while the other adds ‘assuming no catastrophes’ to the end of their questions about superintelligences. And while I think there are probably good cases that can be made about which of these things to worry about, I am not sure that I have seen them made, and my guess is that many people in both groups are trusting those around them, and would have trusted those around them if they had stumbled across the other group and got on well with them socially.

4.c. To what extent could EA be fruitfully influenced by more things?

So far I have talked about behaviors that are more common among other groups of people: causes to forward, interventions to default to, beliefs and assumptions to hold. I could also have talked about customs and attitudes and institutions. This is all stuff we could copy from other people, if we were familiar enough with other people to know what they do and what is worth copying. But there is also value in knowing what is up with other people without copying them. Like, how things fail and how organizations lose direction and what sources of information can be trusted in what ways and which things tend to end up nobody’s job, and which patterns appear across all endeavors for all time. And arguably an inside perspective is more informative than an outside one. For instance, various observations about medicine seem like good clues for one’s overall worldview, though perhaps most of the value there is from looking in detail rather than from the inside. (Whether having a correct worldview effectively contributes to doing good shall remain a question for another time).

Tragedy of the free time commons

Procrastination often seems a bit like an internal tragedy of the commons.

In a tragedy of the commons, a group of people share something nice, like a shared pasture on which to raise their cows. If they together refrained from overusing it they would all benefit (e.g. because it doesn’t become a mud pit), but if any one person alone refrains (e.g. by having a smaller herd of cows), they expect to see little of the benefit themselves, and they probably expect someone else to use more resources (e.g. by adding an additional cow to another herd), so that there isn’t even a shared benefit to the group from the person’s selfless action.

Suppose you have a paper due on Friday, and it is going to take 60 minutes to finish it. Think of yourself over the preceding day as 960 you-minutes. Each you-minute would much prefer the paper be done than not done the following morning, but would somewhat prefer to not work on it themselves. Because these time-slices make their decisions about whether to work one after another, and know what decisions were made in the past, the final N you-minutes in the day will definitely work, if there are N minutes of paper left to write. This means for you-minutes where there are fewer minutes of paper left to write than you-minutes left who might write them, working doesn’t help—it just relieves a later you-minute of working which would otherwise be forced to. And that is certainly worse for the you-minute deciding. So the 60 minutes of writing is done in the last 60 minutes. Which doesn’t destroy any value in this model, so all is good.
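The backward-induction reasoning above can be sketched as a tiny simulation, using the post’s numbers (960 you-minutes, 60 minutes of work). The decision rule—a you-minute works exactly when slacking would leave the remaining minutes unable to cover the remaining work—is my rendering of the argument, not code from the post:

```python
# Toy model of the deterministic case: 960 one-minute agents ("you-minutes")
# share responsibility for 60 minutes of paper, due when the minutes run out.
# Backward induction: the final N you-minutes will definitely work if N minutes
# of paper remain, so a you-minute works only when slacking would make failure
# possible, i.e. when remaining work >= minutes left.
TOTAL_MINUTES = 960
WORK_NEEDED = 60

remaining = WORK_NEEDED
worked = []  # indices of the minutes that end up working
for t in range(TOTAL_MINUTES):
    minutes_left = TOTAL_MINUTES - t  # includes the current minute
    if remaining >= minutes_left:     # slacking now would make failure possible
        remaining -= 1
        worked.append(t)

print(worked[0], worked[-1], remaining)  # 900 959 0: all work in the last hour
```

As the paragraph says, no value is destroyed here: the paper still gets finished, just entirely in the final 60 minutes.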

But let’s make it more realistic. Suppose that there are better and worse times to work, and which are which is not known ahead of time. Working during a worse minute either produces less than a minute of work, or incurs other costs to the relevant you-minute (e.g. extra suffering). Then instead of everyone doing nothing until exactly sixty minutes before the deadline and then working full time after that, work becomes more worthwhile as you move toward the sixty minute mark, because it becomes increasingly likely that the bad minutes later will not be able to fulfill the work demanded of them. So relatively good you-minutes begin to work sometimes and then less good ones and then close to the deadline even the worst you-minutes work. The you-minutes still mostly work to avoid failing; they don’t mind much if they force bad you-minutes later to work, even if several of them have to work to get a minute’s worth of work done, or even if they endure private suffering as a result. So the early good minutes still don’t work much, and toward midnight many bad minutes work and suffer. This is more of a tragedy of the commons: most minutes free-ride, because if they didn’t, someone else would. And all the free-riding causes massive costs.
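The noisier version can also be sketched, with made-up numbers: the probability of a good minute, the output of a bad minute, and the two ‘urgency’ thresholds below are all my own illustrative assumptions, not part of the model above. Each minute works once the remaining work is large relative to the time left, with good minutes volunteering at a lower level of urgency than bad ones:

```python
import random

TOTAL_MINUTES = 960
WORK_NEEDED = 60.0
P_GOOD = 0.3                   # assumed chance a given minute is a "good" one
GOOD_OUT, BAD_OUT = 1.0, 0.5   # paper-minutes produced per minute of work
GOOD_URGENCY = 0.45            # good minutes work when remaining >= 0.45 * minutes_left
BAD_URGENCY = 0.55             # bad minutes hold out until things are more desperate

random.seed(0)
remaining, worked, suffering = WORK_NEEDED, [], 0
for t in range(TOTAL_MINUTES):
    minutes_left = TOTAL_MINUTES - t  # includes the current minute
    good = random.random() < P_GOOD
    urgency = GOOD_URGENCY if good else BAD_URGENCY
    if remaining > 0 and remaining >= urgency * minutes_left:
        remaining -= GOOD_OUT if good else BAD_OUT
        worked.append(t)
        if not good:
            suffering += 1  # a bad minute forced to work, and suffer

print(worked[0], len(worked), suffering)
```

Under these assumptions no one works until roughly the last couple of hours, good minutes start first, and bad minutes get dragged in (and suffer) closer to the deadline—the gradual onset and late-night suffering described above.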

I think this matches pretty well some elements of procrastination that I see.

This is more complicated than the most straightforward kind of tragedy of the commons—for instance, it involves sequential play—but I don’t know if there is a name for this exact kind of game.

Fear or fear?

I wrote recently about how people tend to use the same words—and sometimes concepts—for ‘want’ as in ‘yearn for’ and ‘want’ as in ‘intend’. As in, ‘It’s so lovely here I want to stay forever’, yet ‘I want to leave before midnight, because otherwise I will miss my train’. And the trouble this causes.

I think we do something similar with fear. ‘I’m concerned that X’ can mean ‘I feel fear about the possibility of X’ or it can mean ‘I think X (which would be bad) might be true’. I’m not sure which words naturally distinguish these two different messages, but whatever they are, I don’t seem to use them. For instance, what would avoid ambiguity in this sentence? ‘I ……………. that not enough people are going to vote’. I can think of several ways to fill the slot: worry, fear, am concerned, am scared, am frightened, am anxious. But I think they can either be used in both ways, or suggest a more specific kind of feeling.

The ambiguity of these words is especially noticeable if one has unusual levels of anxiety (for instance because of an anxiety disorder, or I suppose because of a relaxation disorder). If you try to express a different one from the one people expect, it becomes clear that interpreting such a statement relies on context. If you are known to usually be anxious, and you say ‘I fear our shoe rack is not large enough for all of our shoes’ you will be misunderstood to mean ‘my heart is pounding and I can’t breathe or think because of this shoe rack’ when you might be more accurately interpreted as, ‘I feel no emotions about the shoe rack, but may I draw your attention to a problem with it?’.

I don’t know if this causes problems, like the ‘want’ case. It is almost the opposite—you are mixing up ‘I have an urge to avoid this thing’ with ‘I judge there to be a problem here’. So you might expect it to go wrong in an analogous way: we feel fear regarding things, and then jump to expensively avoiding them without taking the other stakes into consideration.

It is certainly true that people behave in this way sometimes. For instance, once I was putting a golden necklace on in a dark car, and when I touched my hands to my neck I found a giant hairy spider on it. I jumped to expensively avoiding the spider in every sense, and did not find the necklace again. My mistake was neglecting to take into account the value of the necklace to me alongside my aversion to having a spider on my neck. I think there are more drawn out examples too. However I am not sure I have ever seen someone behave in this way due to confusion in the use of concepts, whereas with ‘want’ I think I have.

I wonder if more generally people often just use the same words for both ‘I have emotion Y about X’ and ‘my considered attitude toward X is the same as the one I might have if I had emotion Y’. ‘I regret…’, ‘I’m sorry…’, ‘I hope’, and ‘I trust’ seem arguably like this too, but I’m not sure about others.

As I said before, I’m inclined to infer from the fact that people don’t really have good language to distinguish two things that they haven’t historically distinguished the things much. In this case for instance, I suppose that people have mostly treated feeling fear as identical to having the considered position that a thing is risky. This sort of thing would make sense in the design of a creature whose emotions basically track all of the relevant considerations. Or at least as many relevant considerations as any other part of its brain might usefully track. My guess is that we used to be much more like this for various reasons, and now are less so.

In the case of fear, perhaps we used to be in situations where our natural terror regarding aggressive animals for instance directed us well, whereas now we just tend to be too scared of snakes and sharks and not scared enough about heart disease or cars. At the same time, our intellectual faculties have grown into elaborate science and technology that can usefully track things like heart disease and build things like cars.

I have long thought that people often almost accidentally take their feelings to be their considered positions, without having an extra step of considering them. I take this as a bit more evidence, but it is also possible that my earlier theory just made this kind of observation stand out, and there are lots of observations in the world to observe.