
EA as offsetting

Scott made a good post about vegetarianism.

But the overall line of reasoning sounds to me like:

“There’s a pretty good case that one is morally compelled to pay for people in the developing world to have shoes, because it looks pretty clear now that people in the developing world have feet that can benefit a lot from shoes.

However, there is this interesting argument that it is ok not to buy shoes, and instead to offset the failing by donating a small amount to effective charities.”

— which I think many Effective Altruists would consider at least a strange and inefficient way of approaching the question of what one should do, though it does arrive at the correct answer. In particular, why take the detour through an obligation to do something that is apparently not as cost-effective as the offsetting activity? (If it were as cost-effective, we would not prefer the offsetting activity.) That it would be better to replace the first activity with the second seems like it should cast doubt on the reasoning that originally suggested the first activity, assuming that cost-effectively doing good is the goal.

That is, perhaps shoes are cost-effective. Perhaps AMF is. One thing is for sure though: it can’t be that shoes are one of the most cost-effective interventions and can also be cost-effectively offset by donating to AMF instead. If you believe that shoes can be offset, this demonstrates that shoes are less cost-effective than the offset, and so of little relevance to Effective Altruists. We should just do the ‘offset’ activity to begin with.
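To make the dominance argument concrete, here is a minimal sketch with made-up numbers (the 10:1 ratio is mine, purely for illustration; nothing in the argument depends on it):

```python
# Hypothetical numbers, purely illustrative.
# Suppose buying shoes does some amount of good per dollar, and that
# donating $1 to AMF is enough to offset skipping $10 of shoe-buying.
shoes_good_per_dollar = 1.0   # arbitrary units of good
offset_donation = 1.0         # dollars given to AMF
shoes_spending_offset = 10.0  # dollars of shoe-buying it offsets

# For the offset to genuinely compensate, $1 to AMF must do at least as
# much good as $10 of shoes would have:
amf_good_per_dollar = shoes_good_per_dollar * shoes_spending_offset / offset_donation

print(amf_good_per_dollar)  # 10.0 -> AMF is 10x as cost-effective,
# so shoes were never the most cost-effective option in the first place.
```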

Does the above line of reasoning make more sense in the case of vegetarianism? If so, what is the difference? I have some answers, but I’m curious about which ones matter to others.

 

Social games

An interesting thing about playing board games is that in order to get the most out of the game, you have to pretend that you mostly care about winning the game. And not, for instance, about enjoying yourself, making your friend happy, or being cooperative or trustworthy outside of the game. This is sometimes uncomfortable, and causes real conflict, because everybody does care about things outside of the game, and people vary in how thoroughly they are willing or able to pretend otherwise for the sake of the game.

Sports are similar: to enjoy soccer, you have to pretend for a bit that you want to get the ball into the goals so badly that you are willing to risk being hit in the face by a soccer ball. You might even be playing soccer because you think it will improve your health. Inside the soccer, you just care about scoring goals (I think—I don’t actually know how soccer works, in part because I am unable to suspend my preferences about not being hit by balls).

I think social interaction is often similar. Suppose you go out to lunch with a group of work colleagues. The group chooses an outside cafe, overlooking a river. You talk about what kind of boats you are looking at. You talk about how much it must cost to rent property like this. You talk about what other eating options are nearby, and so on. Do you care about kinds of boats? Probably not. Are you going to go home and look up real estate prices for fun? No. You might care a bit about the quality of eating places nearby, but Yelp will probably inform you better than this group. You are adopting a kind of fake, short term interest in the topics, because you have to be interested in the topics to play properly at the social interaction, and thus to enjoy it.

Sometimes I don’t like these fake goals in socializing. Perhaps because you can’t be explicit about them the way you can about soccer and board games. Or perhaps because the real goal is often to be closer with people, and to understand what they really care about, and pretending together that you care about something else feels like an active obstruction to that. It’s just hard to sit there with a straight face and pretend I care about boats, when really I just want to be closer to the person I’m talking to and am willing to adopt any stance regarding boats to get there. Pretending we both care about boats together without acknowledging it makes me feel less close.

So I thought about ways to improve this. Two I know of:

1. Be explicit about your real goals in socializing. This works best if your real goals are not embarrassing, and if your partner is comfortable with explicitness. It’s not that weird to say, ‘I’d like to know you better. Want to tell me more about why you came here?’ It is that weird to say, ‘I find your body overwhelmingly appealing, and am interested in sleeping with you. I’m happy to talk about literally anything, if you think it will increase my odds.’

2. Try to talk about things that you really care about at least as much as the social goals at hand. This means you can talk about the weather with people you don’t care about at all, but if you want to become close friends with someone, you have to talk about what you want to do with your life, or how you will escape prison this evening, or whatever. This isn’t such a bad heuristic on other grounds.

On the other hand, there are reasons that it is common to adopt fake goals in pursuit of your real goals, and it’s not clear that I understand them well enough to abandon them.

Friendship is utilitarian

Suppose you are a perfect utilitarian of some kind. You also live in a world where the utilities associated with your actions are well worked out. You can always spend $1 to save a life, which is worth one million utilons. You can also spend $1 on a cup of tea, which is worth three utilons, and will also make you a little better at working all day, producing $30 of extra value. So tea is a slam dunk. 
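To spell out why tea is a slam dunk, here is a minimal sketch using the numbers above (it assumes, as the argument implicitly does, that the extra $30 of value produced can itself be donated):

```python
# Worked version of the tea example, using the post's assumed numbers.
UTILONS_PER_LIFE = 1_000_000  # $1 saves a life worth one million utilons

def utilons_from_donating(dollars):
    """Utilons from spending dollars directly on saving lives at $1 per life."""
    return dollars * UTILONS_PER_LIFE

def utilons_from_tea():
    """Utilons from spending $1 on tea: the direct pleasure, plus the
    extra $30 of value produced, which can itself fund donations."""
    tea_pleasure = 3    # utilons from the tea itself
    extra_output = 30   # dollars of extra value from working better all day
    return tea_pleasure + utilons_from_donating(extra_output)

print(utilons_from_donating(1))  # 1,000,000 utilons: donate the dollar
print(utilons_from_tea())        # 30,000,003 utilons: buy the tea instead
```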

However, you also have a friend who would like to drink tea, and it will also make her $30 more productive. Your friend, though, is not a utilitarian, but rather some kind of personal leisure satisficer, who will spend her dollars on magazines. You have an opportunity to buy tea for your friend, when she cannot buy it for herself. Do you do so?

Yes. Unless it is a very one-shot friendship, she will likely remember your generosity and buy you tea another time, perhaps when she is the one with the unique tea buying opportunity. So it is roughly as good to buy tea for her as it is to buy tea for yourself. It doesn’t matter whether your friend’s resources are to be ‘wasted’ on goods that you don’t care about, or whether her pleasure in a cup of tea can compare to the value of the human life foregone to buy it. It just matters whether the two of you can cooperate for mutual gain.

I think people often feel that having friends who are worth more than a thousand distant strangers is a departure from some ideal utilitarianism. Or that it is only ok to have such friends if they share similar values, and will spend their own resources in the way you would. I claim highly valued friends are actually a natural consequence of utilitarianism. 

There is a difference in how much you care about your friends terminally versus instrumentally. On utilitarian grounds you would care about them terminally as much as any stranger, and look after them instrumentally so that they look after you. However, caring about each other terminally seems like a pretty good arrangement instrumentally. If two people with different goals, who are cooperating instrumentally, can exchange this situation for one where they both know that they truly share a compromise set of goals, this is a win, because they can now trust each other. Also, humans care about being cared about, so trading real caring produces value in this way too, without any real downside. So arguably any real utilitarian would end up really caring about their friends and other allies, much more than they care about people disconnected from them.

Too late for the easy solutions

The other day I took a day off, to do whatever I felt like. One appealing idea early in the day was to solve a major problem that has been bothering me for as long as I remember. I won’t go into details, but it seemed like maybe if I just sat down and tried to solve it, I might.

Then I realized that I didn’t really feel like solving it. If I could just sit down and solve it, then why hadn’t I already solved it? What would I say happened, even to myself? Either I had so far failed to do an easy thing—at great cost—or a legitimately hard problem had been magically and unintelligibly solved. Either I would be a bad person, or reality would not make sense. Neither of these seemed appealing at all. Better to suffer a little longer, and solve the problem some hard way, without blame or confusion.

When I noticed this, I realized it was actually a familiar pattern. It would be disconcerting to just casually decide to not have the problem any more, without some more meaningful banishment ritual.

This relates to forgetting to solve problems because you assume that you have tried. Instead of assuming that you have tried because you are reasonable, you implicitly assume you have tried because you are too scared of learning that you didn’t try or that your trying was somehow wrong.

This is related to the general mistake where a task feels like it is meant to take a long time, so you dutifully spend a long time on it. For instance, you think you might be curious about this philosophical problem, and so you talk to your advisor about your curiosity, and arrange to get some funding to pursue it for a little while, and make a schedule, and think about how to go about thinking about it, and try to go to more conferences on the topic. Whereas if you had just immediately sat down and tried to answer the question, you might have just answered it. This is something that ‘rationality techniques’ such as setting a five-minute timer for solving a problem, or trying to solve a problem assuming it is really easy, are meant to eradicate.

It also reminds me a bit of people who sacrifice a lot for some goal, and then are faced with the prospect of everyone achieving the same ends without sacrifice, and are reluctant to accept that fate.

I’m not sure what to do about this, but for now I’m trying out just not making this error, without any fancy debugging.

One misuse of the efficient Katja hypothesis

Sometimes I have bad, longstanding problems. Sometimes other people know about them, but probably assume I have done everything I can to solve them, because I’m a reasonable person. So they don’t try to help me to solve them, because the problems are presumably incredibly intractable. They especially don’t try to help me solve them by checking I’ve tried easy things, because that would be rude. Reasonable people don’t go around having terrible problems that are easy to solve. 

Unfortunately, sometimes I do the same thing. I know I have some substantial problem, and since it has been there forever, I assume I’ve tried pretty hard to solve it, because I’m a reasonable person. I don’t explicitly think this, or the argument would look suspicious to me. But if there’s just some terrible thing that is a permanent fixture of my life, presumably that’s because it’s pretty damn hard to fix. 

Sometimes, though, past-Katja was assuming something similar. Sometimes a problem has already been around forever by the first time a person even notices it. You won’t necessarily notice yourself truly noticing it for the first time, and then rapidly take concrete actions to resolve it. It is already familiar. Or perhaps you have had it since before you were reasonable at all, but at every point you implicitly assumed it was beyond your capabilities, since apparently it had been yesterday.

At any rate, sometimes I end up with longstanding problems that I have never actually tried that hard to solve. And unless I actually stare at them and ask myself what it was I did to try to solve them, I don’t notice. This is similar to the error I labeled ‘I like therefore I am’, in which approving of a virtue is mistaken for having it. Here, disapproving of a problem is mistaken for fighting it. I am not sure if this is a pattern for other people. If so, I suggest actually trying to solve your problems.