[Epistemic status: speculation]
If moral progress is so important, we should probably try to improve it.
1. Why have ordinary people been immoral en masse?
From a previous post:
I would like to think I wouldn’t have been friends with slave owners, anti-semites or wife-beaters, but then again most of my friends couldn’t give a damn about the suffering of animals, so I guess I would have been. – Robert Wiblin
I expect the same friends would have been any of those things too, given the right place and period of history. The same ‘faults’ appear to be responsible for most old-fashioned or foreign moral failings: not believing that anything bad is happening if you don’t feel bad about it, and not feeling bad about anything unless there is a social norm of feeling bad about it.
That is, I claim that the procedure individuals use for morality has these key components:
- Conformist moral affect: people have moral feelings, and these mostly reflect what their peers deem right or wrong.
- Dictatorship of moral affect: moral feelings directly determine what people endorse.
So for instance, if everyone around you tortures puppies, most people consequently feel ok about puppy torture. And then if you feel ok torturing puppies, you assume that you are in fact ok with it, rather than, for instance, taking an extra step of conscious deliberation to check this.
(You might wonder what you would be consciously deliberating here: I’m not taking a stance on ethics or meta-ethics, but I think many popular stances do not equate moral correctness with ‘what a person feels like’ so it should often be intelligible to check that one endorses the output of one’s moral feelings.)
This is all a complicated way of saying ‘people do bad because they copy other people who do bad’.
I think it is valuable to say it in the complicated way, because it helps with seeing what might be done differently. It also makes it clearer why things are not so bad: if people only ever copied other people, human morality would be random, and I don’t think it is.
I could say more about why I believe these things, but I probably won’t unless anyone especially disagrees.
2. Should everyone use a different procedure instead?
I claim that while these procedures lead to terrible moral failings by otherwise nice people, they also lead to virtually all nice moral behavior by nice people. So I wouldn’t want to abandon them hastily.
Plus, the obvious alternative seems worse. I’d probably much rather live in a society largely composed of sheep who follow others’ lead on moral issues than one where every individual reasoned about morality themselves from first principles—in whatever time they decided to allocate to the project—and then took their conclusions seriously.
But I expect that there are mild variations on the status quo that are improvements. If we look at how change usually happens, possibly we can direct it a bit.
3. How do morals change?
On the story here, moral views should be basically stable apart from gradual drift. They would change faster if people sometimes have anomalous moral feelings (i.e. those that don’t reflect the existing consensus around them), or if some people think about what is right independent of their own feelings.
For instance, a world that doesn’t care about animal welfare would likely remain so until enough people have strong empathy toward animals that causes them to feel bad about animal suffering in spite of popular indifference, or until some people think about whether they endorse animal suffering from some abstract standpoint (such as utilitarianism), and condemn it in spite of having few feelings about it. This sounds about right to me as key ways that moral change happens, but I don’t know a lot about the history of this so I could easily be wrong.
4. How could morals change more and better?
There are probably lots of things to say about this, but I’ll say some random ones that I thought of.
I said that society moves away from existing moral equilibria by people having anomalous feelings, or people deciding to think about what is right independent of feelings. So things are likely to change more both when more people do those things more, and when the people who do those things have an easier time affecting anything. For instance, an initially uncaring society is more likely to come to care about animal welfare if more of its members find themselves empathising with animals in spite of common norms, or if that minority is respected more, or at least has more ways to bombard those who disagree with them with videos that might change their feelings.
This says nothing about the direction of change however. It isn’t obvious whether more or less change is good, or whether there are many directions change happens in, or how many of them are good. And perhaps we can say something more specific about what kinds of feelings or independent moral thought help?
5. Separating moral feelings and moral positions
My guess is that thinking about ethics instead of acting directly on ethical feelings is usually good. Even if you think ethical feelings are a good basis for decisions, thinking about ethics seems useful because it tends to take a bunch of feelings related to different situations as inputs, and look for consistent positions across a range of questions. My guess is that if there are some ethical views that you would endorse after much thought, this method gets more information about them out of your ethical feelings than acting on each ethical feeling in a one-off fashion does.
I might be failing to think of kinds of ethical thought that people do. The ones I’m familiar with seem to focus on trying to come up with general principles that unite a bunch of moral feelings (including feelings about how morality should involve general principles and not depend on arbitrary things like spatial coordinates).
6. Having better moral feelings
There are probably lots of ways to go about getting unusual moral feelings. You can pick them up from other cultures, or make unusual conceptual associations, or take drugs, or have some sort of weird morally relevant synesthesia. So I wonder if you can selectively try to induce aberrant moral feelings that are useful for moral progress. My guess is yes, but before discussing that, let’s consider common ways moral feelings do change.
First, I wonder if anomalous feelings are often just from changing which group your moral feelings are trying to conform with. If you begin to think of foreigners or women or animals as being in your social sphere, and you imagine that they don’t approve of being treated badly in certain ways, then you come to think treating them badly is immoral just by the usual process of conforming with local moral consensus.
Another kind of moral feeling seems to come from generalizing moral feelings you already have. For instance, if you have a strong sense that pain is bad, and also a sense that it is ok to whip people as punishment, and you then watch someone getting whipped and see that it involves pain, you will probably end up with some conflicting feelings. And perhaps if you grew up away from people getting whipped, so you have unusually weak feelings about whether it should be allowed, your sense that causing pain is wrong might win out, where it didn’t for other people in your society. So that’s another way you might end up with unusual moral feelings.
I think there is a large class of cases like this, where people have moral feelings about the badness of internal states like suffering or indignity, and moral feelings about it being ok to take certain external actions, but where the external actions cause the internal states for someone else. For instance, it might feel wrong for innocent people to live in destitution and danger, and it might also feel right to be able to control who enters one’s country. And both of these might be prevalent views. Which feelings you end up having about the overall issue of refugee quotas is then underdetermined. I think in situations like this people often have unusual feelings relative to people around them because they are in a slightly unusual position, for instance one where refugees are unusually salient.
7. A specific suggestion for having better moral feelings
I propose that a good way to have novel and useful moral feelings is to try to experience the situations and feelings of the people involved in the relevant situation, in accurate proportions. For instance, if you are making decisions about animal welfare, I expect your feelings to be different to most people’s, and also to more accurately track the ethical views you would want to have, if you have interacted with distressed chickens, and happy chickens, and competing farm-owners, and people who do somewhat better on a meat-based diet, and if you have divided your time between the chickens and the farm owners in whatever proportion is appropriate to their scale.
Sometimes it is possible to experience the interests of one side much more strongly than the other. For instance, you might one day be able to see that a genetically modified person is well off, but it will be harder to really experience the badness of playing God. So the proposed heuristic for honing moral feelings might seem inherently utilitarian, in that it only accounts for the feelings of conscious entities. I don’t think that’s true though. You can still set out to experience the things that might most viscerally elicit the feeling of badness of playing God. I can’t actually think of anything that would make me feel conflicted about playing God in the relevant way, so maybe I should find out what makes someone else feel bad about it, at least before I play God. My guess is that there are situations that will make me feel more uneasy about playing God, and I’m suggesting that I will have better moral feelings in expectation if I try to actually viscerally experience those.