On liking things about crushes

Sometimes I have had crushes on people, and then all kinds of miscellaneous characteristics they had seemed good. Not just their face or their sense of style or the exact way they pronounce my name. But also things that would usually be considered unattractive. For instance, if they are balding, I might suddenly find myself excited by sparse head stubble, when I had previously liked luxuriant hair. And then subsequently I would be more attracted to every other balding guy I met.

I think this is not just directly because the person having those characteristics makes those characteristics, by association, the most excellent characteristics a person could have. Though that is maybe part of it (your face reminds me of…you!)

I think it is also because I implicitly infer that the person in question likes those characteristics, and I expect people to like me more if I like the things they like. For instance, if they are grumpy and have crumpled clothes, I think I implicitly infer that they like people being grumpy and wearing crumpled clothes, and that if I favor those things too, it will help us be friends. And I can appreciate a pretty wide range of things, so I implicitly give attention to the ones that are helpful.

So I suppose that I must implicitly believe everyone likes almost all of their characteristics. Explicitly, I think this is unlikely to be true. Though I do expect people to relate more to people who share their characteristics, whether or not they like those characteristics. So maybe that is what I’m implicitly going for.

All this leads me to think that my brain is probably doing a milder version of the thing it does with crushes with respect to other people who I like in less extreme ways, all the time. “Ooh—I guess you like being mildly irritated! I can do that too! Grr. Do you like me?” It is just only strong enough to be introspectively perceptible in the case of crushes. Which I guess matches the observation that people copy each other a lot.

I have long had the abstract impression that I should choose who I spend much time with carefully, because company makes an alarmingly large difference to one’s own behavior. But the way that my brain updates on crushes makes that concern feel more viscerally real to me. Happily (not coincidentally), current company seems pretty good. Though it is unusual company, so probably I don’t give things like religiosity and athleticism proper thought. This concern is not news, but it is a new angle from which to feel that it is actually a real problem, and not just one of those problems that it would be virtuous to be troubled by.

Hiding misinformation in meanings

I

It is hard to spread misinformation, because information spreads too, and they eventually run into each other and explode.

If a person wants to lie, then, they may be better off making words correspond to different things for different people, so that even when people hear the information, it sounds the same as the misinformation.

For instance, suppose you buy tea in Alphaland and sell it in Betaland. As a dishonest businessperson, you would like it if the people in Alphaland believed tea was cheaper than the people in Betaland believed it was. However, if there are two different verbal sentences kicking around about the price of tea, they will eventually run into each other, because sentences can spread fast.

A different solution is to corner the market for tea-weighing devices in at least one nation. Then make them all give slightly biased readouts. Now tea costs the same amount per pound in the two places, but you just sell more pounds than you buy. The information and the misinformation both sound like “tea costs $10/lb”. Tea-weighing devices cross the sea more slowly than words, so this might be more sustainable.
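To make the arithmetic concrete (with made-up numbers, and supposing it is the Betaland scales you have rigged to read about ten percent heavy): 100 true pounds bought in Alphaland at $10/lb cost you $1,000, register as 110 ‘pounds’ on arrival in Betaland, and sell there at the very same $10/lb for $1,100. Both countries hear “tea costs $10/lb”, and the $100 difference hides in what a ‘pound’ means on each side of the sea.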

Relatedly, if you wanted to not have your children informed about Santa Claus, you might just call him something else—e.g. Joulupukki—in your home. If you want, you can tell them there is a probable faker called Santa Claus, and it is a matter of controversy whether he is the real deal like Joulupukki. Because words refer to unusual things, the information—‘Santa Claus isn’t real’—sounds just like your misinformation.

This can really only work if people are sufficiently isolated that the differences in meanings don’t become obvious, but that sometimes happens.

II

I’m not much in favor of misinformation. But one time I was young and desperate and I did something like this.

From when I was a young teenager I was substantially in charge of raising my three younger brothers, and (because I was not a good necromancer) I had to keep the violence within certain limits.

First (I think) I tried to be nice. I sat down and talked to them about what had happened, and if someone had clearly been vicious, I sent him to his room or something. But it seemed that only punishing people who are clearly guilty leads to not-very-conscionable levels of violence to innocent children.

I pondered justice and mercy. I construed the situation game theoretically. I experimented with different rules and punishment regimes. The children bit themselves just to spite each other (I think).

I gave up on figuring out guilt, and tried just sending everyone to their room for every fight. They started fights just to see the innocent victim punished. They also destroyed parts of the house and its contents if they were annoyed about being sent to their rooms unjustly, so sending people to their rooms was kind of costly.

I wondered whether children are oblivious to incentives, or just wisely refuse to negotiate with authorities, forcing the authorities to give up. But I couldn’t really give up, because I didn’t have any other options (I was a step ahead in the not negotiating, as the children might have realized if they had read Schelling).

This all took up a lot of the time that I wasn’t at school, if I recall. Every time I would sit down to read a book or something, I would be interrupted by shrieking. I really don’t like shrieking. At this point, I don’t even like the sound of joyful childish laughter, I think because I associate it with joyful childish unapologetic cruelty and hatred. But furthermore, I don’t like being interrupted every five minutes to have a big argument with some children. So I really didn’t like this situation.

My brothers were ‘meant to’ go to bed in the evening. If I started encouraging them to go to bed at about 10pm, that gave the four of us enough time to argue about whether they should or not for three hours before an adult came home and became angry about how the children weren’t in bed. At which point my brothers would go to bed, because the adults were bigger and more exhausted and more authoritative than me.

At some point I realized I had been thinking about things all wrong. All peace really required was for my brothers to believe that it was almost 1am. And my brothers’ beliefs about what time it was were almost entirely dependent on seven or so clocks. And clocks have little dials on the back of them that you can turn around to change where their hands are.

It was somewhat complicated, because there were a bunch of external signs about what time it was. For instance, school would end at 3pm or so. So it had to be 3.30 or so when the children returned home. After that I would gradually change all seven or so visible clocks in the house forward half an hour or an hour at a time, several times through the afternoon and evening. Then by about 8pm it would be past midnight, and the children would hurry off to bed before any adults came home. Then I would get hours (!!!) of solitude (!!!) and peace (!!!)

This was somewhat complicated by the TV schedule, which I said must have been printed for a state in a different time zone with somewhat different programming.

It was also complicated by me messing up one time, and my brother noticing that it was still light at midnight. But a conspiracy didn’t occur to him, and he dismissed it as ‘funny weather today’.

Ultimately the entire scheme was short-lived, but it was only ended by my mother and stepfather objecting to it. My brothers didn’t suspect anything until I told them about it later.

So, that’s another example. I sometimes wonder if there is more of this kind of thing in the world.

Shame as low verbal justification alarm

What do you think feeling shame means?

You are scared that you might be ostracized from the group.

But I feel shame about things that other people don’t even care about.

You are scared that you should be ostracized from the group.

That seems unrelated to the social realities. Why would evolution bother to equip me with a feeling for that?

Because you need to be able to justify yourself verbally. It is important that you think you should not be ostracized, so you can argue for it. Even if nobody shares your standards, if other people try to ostracize you for some stupid reason and you—for other reasons—believe that you really should be ostracized, then you are in trouble. You want to believe wholeheartedly that you are worthy of alliance. So if you don’t, an alarm bell goes off until you manage to justify yourself or sufficiently distance yourself from the activity that you couldn’t justify.

(From a discussion with John Salvatier)


Reputation bets

People don’t often put their money where their mouth is, but they do put their reputation where their mouth is all the time. If I say ‘The Strategy of Conflict is pretty good’ I am betting some reputation on you liking it if you look at it. If you do like it, you will think better of me, and if you don’t, you will think worse. Even if I just say ‘it’s raining’, I’m staking my reputation on this. If it isn’t raining, you will think there is something wrong with me. If it is raining, you will decrease your estimate of how many things are wrong with me the teensiest bit.

If we have reputation bets all the time, why would it be so great to have more money bets? 

Because reputation bets are on a limited class of propositions. They are all of the form ‘saying X will make me look good’. This is pretty close to betting that an observer will endorse X. Such bets are most useful for statements that are naturally about what the observer will endorse. For instance, (a) ‘you would enjoy this blog’ is pretty close to (b) ‘you will endorse the claim that you would enjoy this blog’. It isn’t quite the same – for instance, if the listener refuses to look at the blog, but judges by its title that it is a silly blog, then (a) might be true while (b) is false. But still, if I want to bet on (a), betting on (b) is a decent proxy.

Reputation bets are also fairly useful for statements where the observer will mostly endorse true statements, such as ‘there is ice cream in the freezer’. Reputation bets are much less useful (for judging truth) where the observer is as likely to be biased and ignorant as the person making the statement. For instance, ‘removing height restrictions on buildings would increase average quality of life in our city’. People still do make reputation bets in these cases, but they are betting on their judgment of the other person’s views.

If the set of things where people mostly endorse true answers is roughly the set where it is pretty clear what the true answer is, then reputation bets do not buy much in the quest for truth. This seems not quite right though. One thing reputation bets do buy is prompting investment in finding out the answer, when finding out is somewhat expensive but worth it if the answer turns out a certain way. For instance, if it looks like all the restaurants are closed today so you want to turn around and go home, and I say ‘no, I promise the sushi place will be open’, then I am placing a reputation bet. It wouldn’t have been worth checking before, but my betting increases your credence that it is open, making it worth checking, which in turn provides the incentive for me to bet correctly.

Another place reputation bets are helpful is if a thing will be discovered clearly in the relatively near future, and it is useful to know beforehand. For instance, we can have a whole discussion of what we will do when we get back to my apartment that implies certain facts about my apartment. You can believe these ahead of time, and plan, because you will think worse of me if, when we get there, it turns out I made the whole thing up.

Good intuitions

Sometimes people have ‘good intuitions’. Which is to say something like, across a range of questions, they tend to be unusually correct for reasons that are hard to explain explicitly.

How do people come to have good intuitions? My first guess is that new intuitions are born from looking at the world, and naturally interpreting it using a bunch of existing intuitions. For instance, suppose I watch people talking for a while, and I have some intuitions about how humans behave, what they want, what their body language means, and how strategic people tend to be. Then I might come to have an intuition for how large a part status plays in human interactions, which I could then go on to use in other cases. If I had had different intuitions about those other things, or watched different people talking, I might have developed a different intuition about the relevance of status.

On this model, when a person has consistently unusually good intuitions, it could be that:

A) Their innate intuition forming machinery is good: perhaps they form hypotheses easily, or they avoid forming hypotheses too easily. Or they absorb others’ useful words into their intuitions easily.

B) They had a small number of particularly useful early intuitions that tend to produce good further intuitions in the presence of the outside world.

C) They have observed more or higher quality empirical data across the areas where they have superior intuitions.

D) They got lucky, and randomly happen to have a lot of good intuitions instead of bad intuitions.

Which of these plays the biggest part seems important for:

  • Judging intuitions in hard or unusual areas: If A), then good intuitions are fairly general. So good intuitions about math (testable) suggest good intuitions about how to avoid existential risk (harder to test). This is decreasingly the case as we move down the alphabet.
  • Spreading good intuitions: If B), then it might be possible to distill the small number of core intuitions a person with good intuitions has, and share them with other people.

I expect all of A-D play some part (and that there are more possibilities I have forgotten). But are some of them particularly common in people who have surprisingly good intuitions?