Commitments and affordances

[Epistemic status: a thing I’ve been thinking about, and may be more ignorant about than approximately everyone else.]

I don’t seem to be an expert on time management right now. But piecing through the crufty wreckage of my plans and lists and calendars, I do have a certain detailed viewpoint on particular impediments to time management. You probably shouldn’t trust my understanding here, but let me make some observations.

Sometimes, I notice that I wish I did more of some kind of activity. For instance, I recently noticed that I did less yoga than I wanted—namely, none ever. I often notice that my room is less tidy than I prefer. Certain lines of inquiry strike me as neglected and fertile and I have an urge to pursue them. This sort of thing happens to me many times per day, which I think is normal for a human.

The most natural and common response to this is to say (to oneself or to a companion) something like, ‘I should really do some yoga’, and then be done with it. This has the virtue of being extremely cheap, and providing substantial satisfaction. But sophisticated non-hypocritical materialists like myself know that it is better to take some sort of action, right now, to actually cause more yoga in the future. For instance, one could make a note in one’s todo list to look into yoga, or better yet, put a plan into one’s reliable calendar system.

Once you have noticed that merely saying ‘I should really do some yoga’ has little consequence, this seems quite wondrous—a set of rituals that can actually cause your future self to do a thing! What power. Yet somehow, life does not become as excellent as one might think as a result. It instead becomes a constant stream of going to yoga classes that you don’t feel like.

One kind of problem seems to come from drawing conclusions that are too strong from the feeling of wanting to do a thing. For instance, hearing your brain say, ‘Ooh babies! I want babies!’ at a baby, and assuming that this means you want babies, and should immediately stop your birth control. This is especially a problem if the part of your brain that wants things (without regard to trade-offs) also follows up with instructions on how to get them. “Oh man, I really love drawing with oil pastels…I should get some…I could set up a little studio in my basement, and enter contests…I should start by buying some pastels on the way home, from that art shop near my house”. I have noticed this before, and now more often think “Oh man, I really love drawing with oil pastels…but probably not enough that it’s worth doing…I’ll put it next to having babies and starting a startup in the pile of nice things I could do if I didn’t have even better things to do”.

Another kind of problem, which is what I’m actually trying to write about, is that after establishing that a thing would actually be great to do, it can be very natural to make a commitment to doing it. For instance, because I wanted to do some yoga, I signed up for a yoga class, and put a repeating event in my calendar. Similarly, if I want to see a person, often I will make an appointment to get lunch with them or something, which I am then committed to. Commitments often go badly. The whole idea of being committed is that you will do the thing regardless of your feelings about it at the time. Which has costs—many things are just much worse if you don’t feel like doing them, either because you need to feel like doing them to do them well, or because not feeling like doing them is information about their value, or because doing a thing you don’t feel like doing is unpleasant in itself.

There are of course upsides to committing—for instance it allows everyone to coordinate their plans, and doing a thing once may be more valuable if you have a strong expectation that you will do it another ten times. I think the error I make is just defaulting to commitments without much concern for whether commitments are appropriate to the situation. My impression is that other people also do this.

If I now want to do some yoga in the future, and I don’t want to commit myself to it, how else can I increase the chance of it happening? The options for influencing my future self’s behavior seem pretty much like those for influencing other people’s behavior (I actually rarely commit other people to doing things against their will). Here are some:

  • cause my future self to notice that yoga is an option
  • let her know about the virtues of doing yoga
  • make yoga salient
  • add further incentives to doing yoga
  • make it easy to do yoga

If I know the virtues of doing yoga, usually my future self will automatically know about them too, so that one isn’t widely applicable. Incentivising doing yoga might be good sometimes, but it sort of suggests that my natural incentives are substantially misaligned with my future self on this, and if that is so, it seems like there are deeper problems e.g. around very high discount rates, that perhaps I should sort out. That is, I’d like it if yoga didn’t just seem appealing because the costs are tomorrow, and I don’t care about tomorrow. Nonetheless, to some extent this is why yoga is appealing, and incentives can help align interests (especially if present me pays for the incentives, instead of stealing from some other future self).

The remaining options—make yoga known, salient, and easy—might be summarised as causing my future self to have an affordance for yoga. They might collectively be achieved by going to yoga once, so that I know where it is and what it involves, and have already paid some of the logistical costs. Also, I will gain a concrete sense of what yoga does that can pop up if I want that kind of thing. My guess is that I should do this kind of thing more often, and that I mostly don’t because I don’t have so much of an affordance for it as I do for making commitments. I haven’t actually tried this a lot, however, so I’m not sure how often replacing commitments with affordances is good. It does seem good to at least notice that there are often alternatives to commitments, for when you are trying to have a causal influence on your future behavior.

One place I have tried this more is in social engagements. Replacing commitments with affordances is part of the motivation for things like the Berkeley Schelling Point (a regular time and cafe at which people can go if they want to hang out), a breakfast club that I’m part of, and my ‘casual social calendar’, in which I write things I’m doing anyway for which I’d be happy for company (e.g. going to the gym) so that my friends can join me if they feel like it. These have varying levels of overall success, but I think they are all better than higher commitment versions of them would be.

Thanks to Ben Hoffman for conversation that inspired this post.

How do we know our own desires?

Sometimes I find myself longing for something, with little idea what it is.

This suggests that perceiving a desire and perceiving what that desire is for are separable mental actions.

In this state, I make guesses as to what I want. Am I thirsty? (I consider drinking some water and see if that feels appealing.) Do I want to have sex? (A brief fantasy informs me that sex would be good, but is not what I crave.) Do I want social comfort? (I open Facebook, maybe that has social comfort I could test with…)

If I do infer the desire in this way, I am still not directly reading it from my own mind. I am making educated guesses and testing them using my mind’s behavior.

Other times, it seems like I immediately know my own desires. When that happens, am I really receiving them introspectively, or am I merely playing the same inference game more insightfully?

We usually suppose that people are correct about their own immediate desires. They may be wrong about whether they want cookie A or cookie B, because they are misinformed about which one is delicious. But if they think they want to eat something delicious, we trust them on that.

On the model where we are mostly inferring our desires from more general feelings of wanting, we might expect people to be wrong about their desires fairly often.

EA as offsetting

Scott made a good post about vegetarianism.

But the overall line of reasoning sounds to me like:

“There’s a pretty good case that one is morally compelled to pay for people in the developing world to have shoes, because it looks pretty clear now that people in the developing world have feet that can benefit a lot from shoes.

However, there is this interesting argument that it is ok to not buy shoes, and offset the failing through donating a small amount to effective charities.”

— which I think many Effective Altruists would consider an at least strange and inefficient way of approaching the question of what one should do, though it does arrive at the correct answer. In particular, why take the detour through an obligation to do something that is apparently not as cost-effective as the offsetting activity? (If it were as cost-effective, we would not prefer to do the offsetting activity.) That it would be better to replace the first activity with the second seems like it should cast doubt on the reasoning that originally suggested the first activity. Assuming cost-effectively doing good is the goal.

That is, perhaps shoes are cost-effective. Perhaps AMF is. One thing is for sure though: it can’t be that shoes are one of the most cost-effective interventions and can also be cost-effectively offset by donating to AMF instead. If you believe that shoes can be offset, this demonstrates that shoes are less cost-effective than the offset, and so of little relevance to Effective Altruists. We should just do the ‘offset’ activity to begin with.
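The inconsistency above can be made concrete with a toy calculation. The numbers here are entirely my own illustrative assumptions (the text gives none): I measure cost-effectiveness in made-up ‘utilons per dollar’ for shoes and for the AMF donation.

```python
# Illustrative numbers only (my assumptions, not from the text):
# cost-effectiveness in utilons per dollar spent.
shoes_per_dollar = 10    # hypothetical value of $1 spent on shoes
amf_per_dollar = 100     # hypothetical value of $1 donated to AMF

# The offsetting claim: skipping $1 of shoes can be made up for by
# donating some smaller amount d, where d * amf_per_dollar covers
# the shoes' value.
d = shoes_per_dollar / amf_per_dollar

# For offsetting to be a *cheap* fix, d must be less than 1...
assert d < 1

# ...but d < 1 is exactly the statement that AMF beats shoes in
# cost-effectiveness, so a cost-effectiveness-minded donor should
# have been donating to AMF in the first place.
print(d)  # 0.1: a $0.10 donation 'offsets' $1 of shoes
```

Whatever numbers you plug in, the offset being affordable and the shoes being maximally cost-effective cannot both hold at once.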

Does the above line of reasoning make more sense in the case of vegetarianism? If so, what is the difference? I have some answers, but I’m curious about which ones matter to others.


Social games

An interesting thing about playing board games is that in order to get the most out of the game, you have to pretend that you mostly care about winning the game. And not, for instance, about enjoying yourself, making your friend happy, or being cooperative or trustworthy outside of the game. This is sometimes uncomfortable, and causes real conflict, because everybody does care about things outside of the game, and people vary in how thoroughly they are willing or able to pretend otherwise for the sake of the game.

Sports are similar: to enjoy soccer, you have to pretend for a bit that you want to get the ball into the goals so badly that you are willing to risk being hit in the face by a soccer ball. You might even be playing soccer because you think it will improve your health. Inside the soccer, you just care about scoring goals (I think—I don’t actually know how soccer works, in part because I am unable to suspend my preferences about not being hit by balls).

I think social interaction is often similar. Suppose you go out to lunch with a group of work colleagues. The group chooses an outside cafe, overlooking a river. You talk about what kind of boats you are looking at. You talk about how much it must cost to rent property like this. You talk about what other eating options are nearby, and so on. Do you care about kinds of boats? Probably not. Are you going to go home and look up real estate prices for fun? No. You might care a bit about the quality of eating places nearby, but Yelp will probably inform you better than this group. You are adopting a kind of fake, short term interest in the topics, because you have to be interested in the topics to play properly at the social interaction, and thus to enjoy it.

Sometimes I don’t like these fake goals in socializing. Perhaps because you can’t be explicit about them the way you can about soccer and board games. Or perhaps because the real goal is often to be closer with people, and to understand what they really care about, and pretending together that you care about something else feels like an active obstruction to that. It’s just hard to sit there with a straight face and pretend I care about boats, when really I just want to be closer to the person I’m talking to and am willing to adopt any stance regarding boats to get there. Pretending we both care about boats together without acknowledging it makes me feel less close.

So I thought about ways to improve this. Two I know of:

1. Be explicit about your real goals in socializing. This works best if your real goals are not embarrassing, and if your partner is comfortable with explicitness. It’s not that weird to say, ‘I’d like to know you better. Want to tell me more about why you came here?’ It is that weird to say, ‘I find your body overwhelmingly appealing, and am interested in sleeping with you. I’m happy to talk about literally anything, if you think it will increase my odds’.

2. Try to talk about things that you really care about at least as much as the social goals at hand. This means you can talk about the weather with people you don’t care about at all, but if you want to become close friends with someone, you have to talk about what you want to do with your life, or how you will escape prison this evening, or whatever. This isn’t such a bad heuristic on other grounds.

On the other hand, there are reasons that it is common to adopt fake goals in pursuit of your real goals, and it’s not clear that I understand them well enough to abandon them.

Friendship is utilitarian

Suppose you are a perfect utilitarian of some kind. You also live in a world where the utilities associated with your actions are well worked out. You can always spend $1 to save a life, which is worth one million utilons. You can also spend $1 on a cup of tea, which is worth three utilons, and will also make you a little better at working all day, producing $30 of extra value. So tea is a slam dunk. 

You also have a friend who would like to drink tea, which will likewise make her $30 more productive. However, your friend is not a utilitarian, but rather some kind of personal leisure satisficer, who will spend her dollars on magazines. You have an opportunity to buy tea for your friend, when she cannot buy it for herself. Do you do so?

Yes. Unless it is a very one-shot friendship, she will likely remember your generosity and buy you tea another time, perhaps when she is the one with the unique tea buying opportunity. So it is roughly as good to buy tea for her as it is to buy tea for yourself. It doesn’t matter whether your friend’s resources are to be ‘wasted’ on goods that you don’t care about, or whether her pleasure in a cup of tea can compare to the value of the human life foregone to buy it. It just matters whether the two of you can cooperate for mutual gain.
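The arithmetic here can be sketched explicitly. The dollar figures and utilon values are from the text; the reciprocity probability is my own illustrative assumption.

```python
# Toy numbers from the text: $1 donated saves a life worth 1,000,000
# utilons; a $1 cup of tea gives 3 utilons of enjoyment and makes the
# drinker $30 more productive (which the utilitarian then donates).
U_PER_DOLLAR = 1_000_000  # utilons per dollar donated

def tea_for_self():
    # Forgo $1 of donations; gain tea enjoyment plus $30 to donate.
    return -1 * U_PER_DOLLAR + 3 + 30 * U_PER_DOLLAR

def tea_for_friend(p_reciprocate):
    # Your friend's extra $30 goes to magazines, so the direct gain is
    # just her enjoyment. The real value is the chance she later buys
    # *you* tea (paying the $1 herself) when you can't buy it yourself.
    her_enjoyment = 3
    reciprocated_tea = 3 + 30 * U_PER_DOLLAR  # you drink, you donate $30
    return -1 * U_PER_DOLLAR + her_enjoyment + p_reciprocate * reciprocated_tea

print(tea_for_self())       # 29,000,003 utilons
print(tea_for_friend(0.9))  # 26,000,005.7 utilons: nearly as good
```

Under these assumed numbers, buying tea for a reliable friend comes out close to buying it for yourself, even though her own spending is ‘wasted’ by your lights.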

I think people often feel that having friends who are worth more than a thousand distant strangers is a departure from some ideal utilitarianism. Or that it is only ok to have such friends if they share similar values, and will spend their own resources in the way you would. I claim highly valued friends are actually a natural consequence of utilitarianism. 

There is a difference in how much you care about your friends terminally versus instrumentally. On utilitarian grounds you would care about them terminally as much as any stranger, and look after them instrumentally so that they look after you. However, caring about each other terminally seems like a pretty good arrangement instrumentally. If two people with different goals, who are cooperating instrumentally, can exchange this situation for one where they both know that they truly share a compromised set of goals, this is a win because they can now trust each other. Also, humans care about being cared about, so trading real caring produces value in this way too, without any real downside. So arguably any real utilitarian would end up really caring about their friends and other allies, much more than they care about people disconnected from them.