Tag Archives: coordination

Humor isn’t norm evasion

Robin adds the recent theory that humor arises from benign norm violations to his Homo Hypocritus model:

The Homo Hypocritus (i.e., man the sly rule bender) hypothesis I’ve been exploring lately is that humans evolved to appear to follow norms, while covertly coordinating to violate norms when mutually advantageous. A dramatic example of this seems to be the sheer joy and release we feel when we together accept particular norm violations. Apparently much “humor” is exactly this sort of joy:

[The paper:] The benign-violation [= humor] hypothesis suggests that three conditions are jointly necessary and sufficient for eliciting humor: A situation must be appraised as a [norm] violation, a situation must be appraised as benign, and these two appraisals must occur simultaneously. … [Those who simultaneously see both interpretations] will be amused. Those who do not simultaneously see both interpretations will not be amused.

In five experimental studies, … we found that benign moral violations tend to elicit laughter (Study 1), behavioral displays of amusement (Study 2), and mixed emotions of amusement and disgust (Studies 3–5). Moral violations are amusing when another norm suggests that the behavior is acceptable (Studies 2 and 3), when one is weakly committed to the violated norm (Study 4), or when one feels psychologically distant from the violation (Study 5). …

We investigated the benign-violation hypothesis in the domain of moral violations. The hypothesis, however, appears to explain humor across a range of domains, including tickling, teasing, slapstick, and puns. (more; HT)

[Robin:] Laughing at the same humor helps us coordinate with close associates on what norms we expect to violate together (and when and how). This may be why it is more important to us that close associates share our sense of humor, than our food or clothing tastes, and why humor tastes vary so much from group to group.

I disagree with the theory and with Robin’s take on it.

Benign social norm violations are often not funny:

Yesterday I drove home drunk, but there was almost nobody out that late anyway.

Some people tell small lies in job interviews.

You got his name wrong, but I don’t think he noticed.

Things are often funny without being norm violations:

People we don’t sympathize with falling over, being fat, being ugly, making mistakes, having stupid beliefs

People trying to gain status we think they don’t deserve and failing (note that it is their failure that is funny, not their norm-violating arrogance), or acting as though they have status when they are really being made fools of

Silly things being treated as though they are dangerous or important, e.g. Monty Python’s killer rabbit, the board game Munchkin’s ‘Boots of Butt-Kicking’, and most of that game’s other jokes

Note that the first two are cases of people we don’t sympathize with having their status lowered, and the third signifies someone acting as if they are inferior to the point of absurdity. Social norm violation often involves someone’s status being lowered: either the norm-violating party’s if they fail, or that of whoever they are committing a violation against. And when people or groups we dislike lose status, this is benign to us. So benign norm violations often coincide with people we don’t care for losing status. There are varieties of benign violation where we are not harmed but where nobody we know of or dislike loses status, and these don’t seem to be funny. All of the un-funny social norm violations I mentioned first are like this. So I think ‘status lowering of those we don’t care for’ is a more promising commonality than ‘benign norm violations’.

I don’t think the benign norm violation view of humor is much use in the Homo Hypocritus model, for three reasons. First, humor can’t easily allow people to agree on what norms to violate, since a violation’s being benign is often a result of the joke being about a distant story that can’t affect you, rather than closely linked to the nature of the transgression. Think of baby-in-the-blender jokes. More likely it helps to coordinate who to transgress against. If I hear people laughing at a political leader portrayed doing a silly dance, I infer much more confidently that they don’t respect the political leader than that they would be happy to do silly dances with me in future.

Second, if humor were a signal between people about which norms to violate, you would not need to get the humor to get the message, so the enjoyment seems redundant. You don’t have to find a joke amusing to see what norm is violated in it, especially if you are the party who likes the norm and would like to prevent conspiracies to undermine it. So this theory doesn’t explain people liking to have similar humor to their friends, nor the wide variety of humor tastes, nor the special emotional response rather than just saying ‘hey, I approve of Irishmen doing silly things, so if you’re Irish we could be silly together later’. You could argue that the emotional response is needed so that the person who makes the joke can judge whether their friends are really loyal to the cause of transgressing this norm, but people laugh at jokes they don’t find that funny all the time.

Last, if you want to conspire to break a social norm together, you would do well to arrange this quietly, not with loud, distinctive cackles.

That said, these are interesting bits of progress, and I don’t have a complete better theory tonight.

Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property, minus the cost of stealing it, was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human would be replaced immediately, and the replacement would use the resources enough better to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However such reasoning really imagines one powerful AI’s dealings with one person, then assumes that generalizes to many of each. Does it?
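The threshold reasoning above can be sketched as a simple inequality. This is only an illustration under my own assumptions; the function name and all the numbers are invented, not from the post:

```python
# Hypothetical sketch of the "efficient to take" condition: an AI gains by
# taking a human's property when the property's value, minus the cost of
# taking it, exceeds what the human would otherwise produce with it.

def taking_is_efficient(property_value, theft_cost, human_output):
    """Return True when taking beats leaving the human in place."""
    return property_value - theft_cost > human_output

# While AIs are only about as capable as us, theft is costly relative to
# what the human produces:
print(taking_is_efficient(property_value=100, theft_cost=80, human_output=50))  # False
# Once AIs are far more capable, theft is cheap and human output is
# comparatively small:
print(taking_is_efficient(property_value=100, theft_cost=5, human_output=50))   # True
```

The point of the paragraph is that this one-AI-one-human calculation may not generalize once many agents interact, which is where law comes in.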

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what they want, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if they could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape the dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.
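The pre-commitment mechanism described above can be made concrete with a toy comparison. All names and strength numbers here are my own invention, purely to illustrate the idea:

```python
# Hypothetical sketch: without law, a strong coalition beats a lone victim;
# with law, the rest of the group has pre-committed to side with whoever is
# being dominated, so the would-be dominators face the whole group.

def dominators_win(coalition_strength, victim_strength,
                   law=False, group_strength=0):
    """Return True if the dominating coalition overpowers the defenders."""
    # Under law, the large group's pre-commitment adds its strength to
    # the victim's side.
    defenders = victim_strength + (group_strength if law else 0)
    return coalition_strength > defenders

# A strong coalition beats a lone victim when there is no law...
print(dominators_win(coalition_strength=10, victim_strength=3))    # True
# ...but loses once the larger group's pre-commitment backs the victim.
print(dominators_win(coalition_strength=10, victim_strength=3,
                     law=True, group_strength=50))                 # False
```

This is the sense in which "the super-strong await to crush the strong": no coalition smaller than the whole group can profitably dominate anyone.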

Is your subconscious communist?

People can be hard to tell apart, even to themselves (picture: Giustino)

Humans make mental models of other humans automatically, and appear to get somewhat confused about who is who at times. This happens with knowledge, actions, attention and feelings:

Just having another person visible hinders your ability to say what you can see from where you stand, though considering a non-human perspective does not:

[The] participants were also significantly slower in verifying their own perspective when the avatar’s perspective was incongruent. In Experiment 2, we found that the avatar’s perspective intrusion effect persisted even when participants had to repeatedly verify their own perspective within the same block. In Experiment 3, we replaced the avatar by a bicolor stick …[and then] the congruency of the local space did not influence participants’ response time when they verified the number of circles presented in the global space.

Believing you see a person moving can impede you in moving differently, similar to rubbing your tummy while patting your head, but if you believe the same visual stimulus is not caused by a person, there is no interference:

[A] dot display followed either a biologically plausible or implausible velocity profile. Interference effects due to dot observation were present for both biological and nonbiological velocity profiles when the participants were informed that they were observing prerecorded human movement and were absent when the dot motion was described as computer generated…

Doing a task where the cues to act may be incongruent with the actions (a red pointer signals that you should press the left button whether the pointer points left or right, and a green pointer signals the right button), the incongruent signals take longer to respond to than the congruent ones. This stops when you only have to look after one of the buttons. But if someone else picks up the other button, it becomes harder once again to do incongruent actions:

The identical task was performed alone and alongside another participant. There was a spatial compatibility effect in the group setting only. It was similar to the effect obtained when one person took care of both responses. This result suggests that one’s own actions and others’ actions are represented in a functionally equivalent way.

You can learn to subconsciously fear a stimulus by seeing the stimulus and feeling pain, but not by being told about it. However seeing the stimulus and watching someone react to pain, works like feeling it yourself:

In the Pavlovian group, the CS+ was paired with a mild shock, whereas the observational-learning group learned through observing the emotional expression of a confederate receiving shocks paired with the CS+. The instructed-learning group was told that the CS+ predicted a shock… As in previous studies, participants also displayed a significant learning response to masked [too fast to be consciously perceived] stimuli following Pavlovian conditioning. However, whereas the observational-learning group also showed this effect, the instructed-learning group did not.

A good summary of all this, Implicit and Explicit Processes in Social Cognition, interprets that we are subconsciously nice:

Many studies show that implicit processes facilitate the sharing of knowledge, feelings, and actions, and hence, perhaps surprisingly, serve altruism rather than selfishness. On the other hand, higher-level conscious processes are as likely to be selfish as prosocial.

It’s true that these unconscious behaviours can help us cooperate, but it seems they are no more ‘altruistic’ than the two-faced conscious processes the authors cite as evidence for conscious selfishness. Our subconsciouses are like the rest of us: adeptly ‘altruistic’ when it benefits them, such as when watched. For an example of how well designed we are in this regard, consider the automatic empathic expression of pain we make upon seeing someone hurt. When we aren’t being watched, feeling other people’s pain goes out the window:

…A 2-part experiment with 50 university students tested the hypothesis that motor mimicry is instead an interpersonal event, a nonverbal communication intended to be seen by the other. … The victim of an apparently painful injury was either increasingly or decreasingly available for eye contact with the observer. Microanalysis showed that the pattern and timing of the observer’s motor mimicry were significantly affected by the visual availability of the victim.