Tag Archives: altruism

A scene I once saw

(inaccurately recounted)

Ms. Knox: When any of you feels ready, you can move into the center of the circle, hold the stone, and tell us all about your feelings about what we are doing. Listen to the trees moving, encouraging you.

Sarah: I feel really proud. Young people are so passionate about the environment. Everyone will have to believe in us when they see how much we care.

Amanda: Excited! I feel like we are going to be part of a really positive change, across the world. It’s so great to be here now, when this is happening.

Marie: I’m just really glad to be here with so many likeminded people. When nobody around you sees what’s possible, it can be really disillusioning, but here I feel like everyone cares so much.

Linda: I feel really inspired by what the others are saying!

Becky: I’m so hopeful when I see all this engagement. I believe we can all stay passionate and keep the movement going until we are old, and inspire the new youth!

Odette: Irritated! I have so many things I would enjoy doing more than saving the environment, both this weekend and for the rest of my life. Preventing ecological catastrophe is very important, but I’d obviously much much prefer that someone else had done it already, or that it never needed doing. It’s extremely disappointing that after this many generations nobody’s got around to the most obvious solutions like taxing the big externalities. These things are not even interesting to think about. In a perfect world it would be nice to play video games most of the time, but I’m at least as frustrated that I won’t even get to work on the interesting altruistic endeavors.


Why is this so rare?

Is it obvious that pain is very important?

“Never, for any reason on earth, could you wish for an increase of pain. Of pain you could wish only one thing: that it should stop. Nothing in the world was so bad as physical pain. In the face of pain there are no heroes, no heroes [...]” –George Orwell, 1984, via Brian Tomasik, who seems to agree that just considering pain should be enough to tell you that it’s very important.

It seems quite a few people I know consider pain to have some kind of special status of badness, and thus consider preventing it much more important than I do. I wouldn’t object, except that they apply this in their ethics, rather than just in their preferences regarding themselves. For instance, they argue that other people shouldn’t have children, because of the possibility of those children suffering pain. I think pain is less important to most people, relative to their other values, than such negative utilitarians and similar folk believe.

One such argument for the extreme importance of pain is something like ‘it’s obvious’. When you are in a lot of pain, nothing seems more important than stopping that pain. Hell, even when you are in a small amount of pain, mitigating it seems a high priority. When you are looking at something in extreme pain, nothing seems more important than stopping that pain. So pain is just obviously the most important bad thing there is. The feeling of wanting a boat and not having one just can’t compare to pain. The goodness of lying down at the end of a busy day is nothing next to the badness of even relatively small pains.

I hope I do this argument justice, as I don’t have a proper written example of it at hand.

An immediate counter is that when we are not in pain, or directly looking at things in pain, pain doesn’t seem so important. For instance, though many people in the throes of a hangover consider it to be pretty bad, they are repeatedly willing to trade half a day of hangover for an evening of drunkenness. ‘Ah’, you may say, ‘that’s just evidence that life is bad – so bad that they are desperate to relieve themselves from the torment of their sober existences! So desperate that they can’t think of tomorrow!’. But people have been known to plan drinking events, and even to be in quite good spirits in anticipation of the whole thing.

It is implicit in the argument from ‘pain seems really bad close up’ that pain does not seem so bad from a distance. How then to know whether your near or far assessment is better?

You could say that up close is more accurate, because everything is more accurate with more detail. Yet since this is a comparison between different values, being up close to one relative to others should actually bias the judgement.

Perhaps up close is more accurate because at a distance we do our best not to think about pain, because it is the worst thing there is.

If you are like many people, when you are eating potato chips, you really want to eat more potato chips. Concern for your health, your figure, your experience of nausea all pale into nothing when faced with your drive to eat more potato chips. We don’t take that as good evidence that really deep down you want to eat a lot of potato chips, and you are just avoiding thinking about it all the rest of the time to stop yourself from going crazy. How is that different?

Are there other reasons to pay special attention to the importance of pain to people who are actually experiencing it?

Added: I think I have a very low pain threshold, and am in a lot of pain far more often than most people. I also have bad panic attacks from time to time, which I consider more unpleasant than any pain I have come across, and milder panic attacks frequently. So it’s not that I don’t know what I’m talking about. I agree that suffering comes with (or consists of) an intense urge to stop the suffering ASAP. I just don’t see that this means that I should submit to those urges the rest of the time. To the contrary! It’s bad enough to devote that much time to such obsessions. When I am not in pain I prefer to work on other goals I have, like writing interesting blog posts, rather than say trying to discover better painkillers. I am not willing to experiment with drugs that could help if I think they might interfere with my productivity in other ways. Is that wrong?

I am anti-awareness and you should be too

People seem to like raising awareness a lot. One might suspect too much, assuming the purpose is to efficiently solve whatever problem the awareness is being raised about. It’s hard to tell whether it is too much by working out how much is the right amount then checking if it matches what people do. But a feasible heuristic approach is to consider factors that might bias people one way or the other, relative to what is optimal.

Christian Lander at Stuff White People Like suggests some reasons raising awareness should be an inefficiently popular solution to other people’s problems:

This belief [that raising awareness will solve everything] allows them to feel that sweet self-satisfaction without actually having to solve anything or face any difficult challenges…

What makes this even more appealing for white people is that you can raise “awareness” through expensive dinners, parties, marathons, selling t-shirts, fashion shows, concerts, eating at restaurants and bracelets.  In other words, white people just have to keep doing stuff they like, EXCEPT now they can feel better about making a difference…

So to summarize – you get all the benefits of helping (self satisfaction, telling other people) but no need for difficult decisions or the ensuing criticism (how do you criticize awareness?)…

He seems to suspect that people are not trying to solve problems at all, but I shan’t argue about that here. At least some people think that they are trying to campaign effectively; this post is concerned with biases those people might face. Christian may or may not have identified such a bias. All things equal, it is better to solve problems in easy, fun, safe ways. However, if it is easier to overestimate the effectiveness of easy, fun, safe things, we probably raise awareness too much. I suspect this is true. I will add three more reasons to expect awareness to be over-raised.

First, people tend to identify with their moral concerns. People identify with moral concerns much more than they do with their personal, practical concerns for instance. Those who think the environment is being removed too fast are proudly environmentalists while those who think the bushes on their property are withering too fast do not bother to advertise themselves with any particular term, even if they spend much more time trying to correct the problem. It’s not part of their identity.

People like others to know about their identities. And raising awareness is perfect for this. Continually incorporating one’s concern about foreign forestry practices into conversations can be awkward, effortful and embarrassing. Raising awareness displays your identity even more prominently, while making this an unintended side effect of costly altruism for the cause rather than purposeful self advertisement.

That raising awareness is driven in part by a desire to identify is evidenced by the fact that while ‘preaching to the converted’ is the epitome of verbal uselessness, it is still a favorite activity for those raising awareness, for instance at rallies, dinners and lectures. Wanting to raise awareness among people who are already well aware suggests that the information you hope to transmit is not about the worthiness of the cause. What else new could you be showing them? An obvious answer is that they learn who else is with the cause. Which is some information about the worthiness of the cause, but has other reasons for being presented. Robin Hanson has pointed out that breast cancer awareness campaign strategy relies on everyone already knowing about not just breast cancer but about the campaign. He similarly concluded that the aim is probably to show a political affiliation.


In many cases of identifying with a group to oppose some foe, it is useful for the group if you often declare your identity proudly and commit yourself to the group. If we are too keen to raise awareness about our identities, perhaps we are just used to those cases, and treat breast cancer like any other enemy who might be scared off by assembling a large and loyal army who don’t like it. I don’t know. But for whatever reason, I think our enthusiasm for increased awareness of everything is given a strong push by our enthusiasm for visibly identifying with moral causes.

Secondly and relatedly, moral issues arouse a person’s drive to determine who is good and who is bad, and to blame the bad ones. This urge to judge and blame should, for instance, increase the salience of everyone around you eating meat if you are a vegetarian. This is at the expense of giving attention to any of the larger scale features of the world which contribute to how much meat people eat and how good or bad this is for animals. Rather than finding a particularly good way to solve the problem of too many animals suffering, you could easily be sidetracked by the fact that your friends are being evil. Raising awareness seems like a pretty good solution if the glaring problem is that everyone around you is committing horrible sins, perhaps inadvertently.

Lastly, raising awareness is specifically designed to be visible, so it is intrinsically especially likely to spread among creatures who copy one another. If I am concerned about climate change, possible actions that will come to mind will be those I have seen others do. I have seen in great detail how people march in the streets or have stalls or stickers or tell their friends. I have little idea how people develop more efficient technologies or orchestrate less publicly visible political influence, or even how they change the insulation in their houses. This doesn’t necessarily mean that there is too much awareness raising; it is less effort to do things you already know how to do, so it is better to do them, all things equal. However too much awareness raising will happen if we don’t account for there being a big selection effect other than effectiveness in which solutions we will know about, and expend a bit more effort finding much more effective solutions accordingly.

So there are my reasons to expect too much awareness is raised. It’s easy and fun, it lets you advertise your identity, it’s the obvious thing to do when you are struck by the badness of those around you, and it is the obvious thing to do full stop. Are there any opposing reasons people would tend to be biased against raising awareness? If not, perhaps I should stop telling you about this problem and find a more effective way to lower awareness instead.

Is cryonicists’ selfishness distance induced?

Tyler’s criticism of cryonics, shared by others, including me at times:

Why not save someone else’s life instead?

This applies to all consumption, so is hardly a criticism of cryonics, as people pointed out. Tyler elaborated that it just applies to expressive expenditures, which Robin pointed out still didn’t pick out cryonics over the vast assortment of expressive expenditures that people (who think cryonics is selfish) are happy with. So why does cryonics instinctively seem particularly selfish?

I suspect the psychological reason cryonics stands out as selfish is that we rarely have the opportunity to selfishly splurge on something so far in the far reaches of far mode as cryonics, and far mode is the standard place to exercise our ethics.

Cryonics is about what will happen in a *long time* when you *die*  to give you a *small chance* of waking up in a *socially distant* society in the *far future*, assuming you *widen your concept* of yourself to any *abstract pattern* like the one manifested in your biological brain and also that technology and social institutions *continue their current trends* and you don’t mind losing *peripheral features* such as your body (not to mention cryonics is *cold* and seen to be the preserve of *rich* *weirdos*).

You’re not meant to be selfish in far mode! Freeze a fair princess you are truly in love with, or something. Far mode enlivens our passion for moral causes and abstract values. If Robin is right, this is because it’s safe to be ethical about things that won’t affect you, yet it still sends signals to those around you about your personality. It’s a truly mean person who won’t even claim someone else a long way away should have been nice fifty years ago. So when technology brings the potential for far things to affect us more, we mostly don’t have the built-in selfishness required to zealously chase the offerings.

This theory predicts that other personal expenditures on far mode items will also seem unusually selfish. Here are some examples of psychologically distant personal expenditures to test this:

  • space tourism
  • donating to/working on life extension because you want to live forever
  • traveling in far away socially distant countries without claiming you are doing it to benefit or respect the locals somehow
  • astronomy for personal gain
  • buying naming rights to stars
  • lottery tickets
  • maintaining personal collections of historical artifacts
  • building statues of yourself to last long after you do
  • recording your life so future people can appreciate you
  • leaving money in your will to do something non-altruistic
  • voting for the party that will benefit you most
  • supporting international policies to benefit your country over others

I’m not sure how selfish these seem compared to other non-altruistic purchases. Many require a lot of money, which makes anything seem selfish I suspect. What do you think?

If this theory is correct, does it mean cryonics is unfairly slighted because of a silly quirk of psychology? No. Your desire to be ethical about far away things is not obviously less real or legitimate than your desire to be selfish about near things, assuming you act on it. If psychological distance really is morally relevant to people, it’s consistent to think cryonics too selfish and most other expenditures not. If you don’t want psychological distance to be morally relevant then you have an inconsistency to resolve, but how you should resolve it isn’t immediately obvious. I suspect however that as soon as you discard cryonics as too selfish you will get out of far mode and use that money on something just as useless to other people and worth less to yourself, but in the realm more fitting for selfishness. If so, you lose out on a better selfish deal for the sake of not having to think about altruism. That’s not altruistic, it’s worse than selfishness.

Heuristics for a good life

I wondered what careers or the like help other people the most. Tyler reposted my question, adding:

Let’s assume pure marginalist act utilitarianism, namely that you choose a career and get moral credit only for the net change caused by your selection.  Furthermore, I’ll rule out “become a billionaire and give away all your money” or “cure cancer” by postulating that said person ends up at the 90th percentile of achievement in the specified field but no higher.

And answered:

What first comes to mind is “honest General Practitioner who has read Robin Hanson on medicine.”  If other countries are fair game, let’s send that GP to Africa.  No matter what the locale, you help some people live and good outcomes do not require remarkable expertise.  There is a shortage of GPs in many locales, so you make specialists more productive as well.  Public health and sanitation may save more lives than medicine, but the addition of a single public health worker may well have a smaller marginal impact, given the greater importance of upfront costs in that field.

I’m not convinced that the Hanson-educated GP would be much better than a materialism-educated spiritual healer. Tyler’s commenters have a lot of suggestions for where big positives might be too – but all jobs have positives and many of them seem important, so how to compare? Unfortunately, calculating the net costs and benefits of all the things one could do with oneself is notoriously impossible. So how about some heuristics for what types of jobs tend to be more socially beneficial?

Here are some ideas; please extend and criticize:

  • Low displacement: if someone had to be hired anyway, you only add the difference between your ability and the second best candidate’s (plus the second best candidate’s efforts, which go to another job at random). The same goes for what you produce. Even if creating beautiful music doesn’t knock another musician out of business, people listen to your new song instead of older songs, which seemingly aren’t any worse.
  • Big gains to a marginal person being better: careers that fail the above can still rate highly if this holds. This is a hard route, because where candidate quality matters more there will generally be stronger selection, so you will be closer to average among those who get in. Your best bet here might be something important to you which usually attracts people with poor abilities.
  • Goal discretion: one way a displacing person can make a relatively big difference is to have a better purpose than the person they displace in a position where that matters. This is more likely in jobs that aren’t meant to be altruistic. e.g. if you head an NGO for delivering fresh water to poor households, you only add to the cause to the extent that you are better at it than the alternative guy. If you are an academic with some choice in research topics and no imperative to be altruistic, you can add a person’s worth of effort to an important topic at the expense of an unimportant topic without having to be any better at researching. Similar for shop owners and heads of state.
  • Profits: if you are making huge profits, it’s probably good.
  • Positive externalities: same.
  • Ongoing effects: usually a kind of positive externality. Nice cake lasts minutes whereas nice ideas last a long time (though someone else would have thought of them eventually).
  • Negative externalities: the reverse of the above.
  • Fueling status races or other zero sum games: arguably, things people buy to look better than other people cause no overall benefit, so you shouldn’t read too much into the profitability of selling such things. Similar for weapons in at least some situations, and for selling any other capabilities that produce negative externalities.
  • Speeding trends: culturesome things tend to cycle through new variants continually. Contributing a song, movie, or meme variant for a current trend certainly makes a lot of people like you (briefly). But it’s not clear what people value here: each new item while it is new, or being up to date with the current trends in general. If the former you are doing them a service by adding your bit of culture, if the latter you are just making it more expensive for them to keep up.
  • Effects on galaxy colonization chances and speed: this matters so much you can ignore all the other things.

What jobs do well on most of these things?

Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property, minus the cost of stealing it, was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human would be replaced immediately, and the replacement would use the resources enough better to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However, such reasoning really imagines one powerful AI’s dealings with one person, then assumes that generalizes to many of each. Does it?

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what they want, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if they could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape this dynamic of groups dominating one another (or some of it): everyone in a very large group pre-commits to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.

Generous people cross the street before the beggar

Robert Wiblin points to a study showing that the most generous people are the most keen to avoid situations where they will be generous, even though the people they would have helped will go without.

We conduct an experiment to demonstrate the importance of sorting in the context of social preferences. When individuals are constrained to play a dictator game, 74% of the subjects share. But when subjects are allowed to avoid the situation altogether, less than one third share. This reversal of proportions illustrates that the influence of sorting limits the generalizability of experimental findings that do not allow sorting. Moreover, institutions designed to entice pro-social behavior may induce adverse selection. We find that increased payoffs prevent foremost those subjects from opting out who share the least initially. Thus the impact of social preferences remains much lower than in a mandatory dictator game, even if sharing is subsidized by higher payoffs…

A big example of generosity inducing institutions causing adverse selection is market transactions with poor people.

For some reason we hold those who trade with another party responsible for that party’s welfare. We blame a company for not providing its workers with more, but don’t blame other companies for lack of charity to the same workers. This means that you can avoid responsibility to be generous by not trading with poor people.

Many consumers feel that if they are going to trade with poor people they should buy fair trade or thoroughly research the supplier’s niceness. However, they often don’t have the money or time for those options, so they instead just avoid buying from poor people. Only the less ethical remain to contribute to the purses of the poor.

Probably the kindest girl in my high school said to me once that she didn’t want a job where she would get rich because there are so many poor people in the world. I said that she should be rich and give the money to the poor people then. Nobody was wowed by this idea. I suspect something similar happens often with people making business and employment decisions. Those who have qualms about a line of business such as trade with poor people tend not to go into that, but opt for something guilt free already, while the less concerned do the jobs where compassion might help.

Why do animal lovers want animals to feel pain?

Behind the veil of (lots of) ignorance, would you rather squished chickens be painless?

We may soon be able to make pain-free animals, according to New Scientist. The study they reported on finds that people are not enthused about creating such creatures for scientific research, which is interesting. Robin Hanson guessed, prior to seeing the article, that this was because endorsing pain-free animals would require thinking that farmed animals now are in more pain than wild animals, which people don’t think. However, it turns out that vegetarians and animal welfare advocates were much more opposed to the idea than others in the study, so another explanation is needed.

Robert Wiblin suggested to me that vegetarians are mostly in favor of animals not being used, as well as not being hurt, so they don’t want to support pain-free use, as that is supporting use. He made this comparison:

Currently children are being sexually abused. The technology now exists to put them under anaesthetic so that they don’t experience the immediate pain of sexual abuse. Should we put children under anaesthetic to sexually abuse them?

A glance at the comments on other sites reporting the possibility of painless meat suggests vegetarians cite this along with a lot of different reasons for disapproval. And sure enough it seems mainly meat eaters who say eliminating pain would make them feel better about eating meat. The reasons vegetarians (and others) give for not liking the idea, or for not being more interested in pain-free meat, include:

  • The animals would harm themselves without knowing
  • Eating animals is bad for environmental or health reasons
  • Killing is always wrong
  • Animals have complex social lives and are sad when their family are killed, regardless of pain
  • Animals are living things [?!]
  • There are other forms of unpleasantness, such as psychological torture
  • How can we tell they don’t feel pain?
  • We will treat them worse if we think they can’t feel it, and we might be wrong
  • There are better solutions, such as not eating meat
  • It’s weird, freaky, disrespectful
  • It’s selfish and unnecessary for humans to do this to animals

Many reasonable reasons. The fascinating thing, though, is that vegetarians seem to consistently oppose the idea, yet not share the same reasons. Three (not mutually exclusive) explanations:

  1. Vegetarians care more about animals in general, so care about lots of related concerns.
  2. Once you have an opinion, you collect a multitude of reasons to have it. When I was a vegetarian I thought meat eating was bad for the environment, bad for people who need food, bad for me, maybe even bad for animals. This means when a group of people lose one reason to hold a shared belief they all have other reasons to put forward, but not necessarily the same ones.
  3. There’s some single reason vegetarians are especially motivated to oppose pain-free meat, so they each look for a reason to oppose it, and come across different ones, as there are many.

I’m interested by 3 because the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice.

The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people – an attempt to undermine their dream of a future where everybody longs to be very informed on the social and environmental effects of their consumption choices or to sort their recycling really well. Some examples of this sentiment:

  • A downside to recreating extinct species with cloning is that it will let people bother even less about stopping extinctions.
  • A recycling system where items are automatically and efficiently sorted at the plant rather than individually in homes would be worse because then people would be ignorant about the effort it takes to recycle.
  • Modern food systems lamentably make people lazy and ignorant of where their food comes from.
  • Making cars efficient just lets people be lazy and drive them more, rather than using real solutions like bikes.
  • The internet’s ready availability and general knowledge allows people to be ignorant and not bother learning facts.

In these cases, having solved a problem a better way should mean that efforts to solve it via personal sacrifice can be lessened. This would be a good thing if we wanted to solve the problem, and didn’t want to sacrifice. We would rejoice at progress allowing ever more ignorance and laziness on a given issue. But often we instead regret the end of an opportunity to show compassion and commitment. Especially when we were the compassionate, committed ones.

Is vegetarian opposition to preventing animal pain an example of this kind of motivation? Vegetarianism is a big personal effort, a moral issue, a cause of feelings of moral superiority, and a feature of identity which binds people together. It looks like other issues where people readily claim fear of an end to virtuous efforts.  How should we distinguish between this and the other explanations?

Charitable explanation

Is anyone really altruistic? The usual cynical explanations for seemingly altruistic behavior are that it makes one feel good, it makes one look good, and it brings other rewards later. These factors are usually present, but how much do they contribute to motivation?

One way to tell if it’s all about altruism is to invite charity that explicitly won’t benefit anyone. Curious economists asked their guinea pigs for donations to a variety of causes, warning them:

“The amount contributed by the proctor to your selected charity WILL be reduced by however much you pass to your selected charity. Your selected charity will receive neither more nor less than $10.”

Many participants chipped in nonetheless:

We find that participants, on average, donated 20% of their endowments and that approximately 57% of the participants made a donation.

This is compared to giving an average of 30-49% in experiments where donating benefited the cause, but it is of course possible that knowing you are helping offers more of a warm glow. It looks like at least half of giving isn’t altruistic at all, unless the participants were interested in the wellbeing of the experimenters’ funds.

The opportunity to be observed by others also influences how much we donate, and we are duly rewarded with reputation:

Here we demonstrate that more subjects were willing to give assistance to unfamiliar people in need if they could make their charity offers in the presence of their group mates than in a situation where the offers remained concealed from others. In return, those who were willing to participate in a particular charitable activity received significantly higher scores than others on scales measuring sympathy and trustworthiness.

This doesn’t tell us whether real altruism exists though. Maybe there are just a few truly altruistic deeds out there? What would a credibly altruistic act look like?

Fortunately for cute children desirous of socially admirable help, much charity is not driven by altruism (picture: Laura Lartigue)


If an act made the doer feel bad, look bad to others, and endure material cost, while helping someone else, we would probably be satisfied that it was altruistic. For instance if a person killed their much loved grandmother to steal her money to donate to a charity they believed would increase the birth rate somewhere far away, at much risk to themselves, it would seem to escape the usual criticisms. And there is no way you would want to be friends with them.

So why would anyone tell you if they had good evidence they had been altruistic? The more credible evidence should look particularly bad. And if they were keen to tell you about it anyway, you would have to wonder whether it was for show after all. This makes it hard for an altruist to credibly inform anyone that they were altruistic. On the other hand the non-altruistic should be looking for any excuse to publicize their good deeds. This means the good deeds you hear about should be very biased toward the non-altruistic. Even if altruism were all over the place it should be hard to find. But it’s not, is it?

Is your subconscious communist?

People can be hard to tell apart, even to themselves (picture: Giustino)


Humans make mental models of other humans automatically, and appear to get somewhat confused about who is who at times.  This happens with knowledge, actions, attention and feelings:

Just having another person visible hinders your ability to say what you can see from where you stand, though considering a non-human perspective does not:

[The] participants were also significantly slower in verifying their own perspective when the avatar’s perspective was incongruent. In Experiment 2, we found that the avatar’s perspective intrusion effect persisted even when participants had to repeatedly verify their own perspective within the same block. In Experiment 3, we replaced the avatar by a bicolor stick …[and then] the congruency of the local space did not influence participants’ response time when they verified the number of circles presented in the global space.

Believing you see a person moving can interfere with your own ability to move differently, much as rubbing your tummy while patting your head is hard, but if you believe the same visual stimulus is not caused by a person, there is no interference:

[A] dot display followed either a biologically plausible or implausible velocity profile. Interference effects due to dot observation were present for both biological and nonbiological velocity profiles when the participants were informed that they were observing prerecorded human movement and were absent when the dot motion was described as computer generated…

In a task where the cues to act may be spatially incongruent with the required actions (a red pointer signals that you should press the left button, whether the pointer points left or right, and a green pointer signals the right button), incongruent signals take longer to respond to than congruent ones. This effect disappears when you are responsible for only one of the buttons. But if someone else takes charge of the other button, incongruent actions become harder once again:

The identical task was performed alone and alongside another participant. There was a spatial compatibility effect in the group setting only. It was similar to the effect obtained when one person took care of both responses. This result suggests that one’s own actions and others’ actions are represented in a functionally equivalent way.

You can learn to subconsciously fear a stimulus by seeing it while feeling pain, but not by merely being told about the connection. However, seeing the stimulus while watching someone else react to pain works like feeling the pain yourself:

In the Pavlovian group, the CS1 was paired with a mild shock, whereas the observational-learning group learned through observing the emotional expression of a confederate receiving shocks paired with the CS1. The instructed-learning group was told that the CS1 predicted a shock…As in previous studies, participants also displayed a significant learning response to masked [too fast to be consciously perceived] stimuli following Pavlovian conditioning. However, whereas the observational-learning group also showed this effect, the instructed-learning group did not.

A good summary of all this, Implicit and Explicit Processes in Social Cognition, concludes that we are subconsciously nice:

Many studies show that implicit processes facilitate the sharing of knowledge, feelings, and actions, and hence, perhaps surprisingly, serve altruism rather than selfishness. On the other hand, higher-level conscious processes are as likely to be selfish as prosocial.

It’s true that these unconscious behaviours can help us cooperate, but they seem no more ‘altruistic’ than the two-faced conscious processes the authors cite as evidence of conscious selfishness. Our subconsciouses are like the rest of us: adeptly ‘altruistic’ when it benefits them, such as when they are watched. For an example of how well designed we are in this regard, consider the automatic empathic expression of pain we make upon seeing someone hurt. When we aren’t being watched, feeling other people’s pain goes out the window:

…A 2-part experiment with 50 university students tested the hypothesis that motor mimicry is instead an interpersonal event, a nonverbal communication intended to be seen by the other….The victim of an apparently painful injury was either increasingly or decreasingly available for eye contact with the observer. Microanalysis showed that the pattern and timing of the observer’s motor mimicry were significantly affected by the visual availability of the victim.