For whom should recommendations be effective?

Suppose you are in the business of making charity recommendations to others. You have found two good charities which you might recommend: 1) Help Ugly Children, and 2) Help Cute Children. It turns out ugly children are twice as easy to help, so 1) is the more effective place to send your money.

You are about to recommend HUC when it occurs to you that if you ask other people to help ugly children, some large fraction will probably ignore your advice, conclude that this effectiveness road leads to madness, and continue to support 3) Entertain Affluent Adults, which you believe is much less effective than HUC or HCC. On the other hand, if you recommend Help Cute Children, you think everyone will take it up with passion, and much more good will be done directly as a result.

What do you recommend?

The economy of weirdness

It is often said that you should spend your weirdness budget wisely. You should wear a gender-appropriate suit, and follow culture-appropriate sports, and use good grammar, and be non-specifically spiritual, and support moderate policies, and not have any tattoos around either of your eyes. And then on the odd occasion, when it happens to come up, you should gather up your entire weirdness budget and make a short, impassioned speech in favor of invertebrate equality. Or whatever you think is the very most effective use of weirdness. In short: you only get so much weirdness, so don’t use it up dressing like a clown or popularizing alternative sleep schedules.

While I agree the oddball activist will often get less airtime than her unassuming analog, and that weirdness is often a cost, the issue seems more complex. Let us better explore weirdness budgeting.

Model #1: Weirdness is badness

A first simple model is that people don’t like weird things, so if you have any, they will like you less in expectation. Weirdness is a kind of badness. On this model, I suppose the reason you would want to be weird at all is that you just are weird, and it is hard or unpleasant to keep it under control.

Some characteristics are certainly like this. For instance, being shockingly unable to open corkscrews, or tending to fart really loudly. These are just bad characteristics though, and don’t seem like they need to be budgeted differently from other bad but not weird characteristics, like being lazy and stupid. I don’t think this is what people have in mind when they say to spend your weirdness budget wisely.

Model #2: Weirdness is rarity is bad

Here is a closely related model. Weird traits are not inherently bad, but they are inherently unusual, and being unusual is inherently bad. On this model, the reason you want to have a weird trait could be that you like the trait, and so you want to make it less unusual.

If many people feel that way, then on this model, weird traits are tragedies of the commons. For example, if everyone went naked in the street, the world would be a better place for everyone. But sadly, because nobody does it, anyone who starts is socially punished. So it is only the very altruistic person who will pull off their pants and be ostracized for the common good.

Model #3: Weirdness among the cool kids is bad

This is like the last model, but explains why you would want to budget your weirdness. In it, what matters is not how common a trait is overall, but how common it is among cool people (or perhaps how differentially common it is among cool people). So you don’t want to help popularize too many weird traits, because the more weird traits you have, the less cool you seem, and thus the less your vote in favor of those traits counts.

I think there is a hint of truth to these models so far. Some kinds of unusualness are inherently bad, unusualness itself is often bad, and having a trait makes it less unusual among people like you. However I highly doubt that people are mostly weird out of altruism, or even altruism combined with an inability to control their weirdness. People love being weird. (Often.)

Model #4: Weirdness is divisive

Some weird traits are unambiguously bad. Some are unambiguously good, and empirically, these don’t appear to use up weirdness budget. If you are weirdly hilarious this probably means you can get away with more other weirdness, not less.

Many traits are a bit good and a bit bad: they please some people while scaring off others. If a trait is ‘weird’, it probably displeases most people and appeals to a few. But this isn’t necessarily a bad deal, even from a selfish perspective.

For one thing, it might please the few a lot. Being into 15th century East Asian architecture will seem merely uninteresting to the vast majority of people, but exceptionally exciting to the few who share your interest.

For another thing, it matters how much you care about different levels of liking. For many circumstances, the big value is in having everyone think you are basically ok. If you are widely considered basically ok, you can be trusted on routine issues, you can have a job, you can have friends, you can be taken seriously. If you are basically ok and have one weird opinion, you can be a datapoint suggesting that weird opinion is ok for basically ok people to have.

However if you want people to buy your book, or change continents to live with you, or fund your experimental research organization, then you need some people to really like you. But luckily, you don’t need that many. And when the bar is high, and you only need to meet it a few times, you want high variance. If you can pick up a trait that 90% of the population dislikes, but the remainder likes, you might take it. Because ten percent of people liking you can be way better than everyone being indifferent. And then you might do it again, and again. Until eventually, you marry the last person and ignore the rest.

Of course, there are also traits that 60% of people are indifferent to and 40% of people love, and these are a better deal, so you should start there, all else equal. But there are many other reasons to have particular traits, e.g. you already have them, and it would be effort to hide or destroy them. Generally, it is easy for a trait you want to have for other reasons to be of positive value on social grounds, in spite of being weird and seeming bad to many people.
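To make the variance argument concrete, here is a toy calculation in Python (all numbers are invented for illustration; the point is only the shape of the trade-off between a bland trait and a divisive one):

    # Toy model of the variance argument: only strong liking crosses the
    # high bar that matters, while mild dislike costs relatively little.
    # All payoffs are invented for illustration.

    def expected_payoff(p_love, p_dislike, value_love=100.0, cost_dislike=1.0):
        """Expected social payoff per person encountered."""
        return p_love * value_love - p_dislike * cost_dislike

    bland = expected_payoff(p_love=0.0, p_dislike=0.0)     # everyone indifferent
    divisive = expected_payoff(p_love=0.1, p_dislike=0.9)  # 10% love it, 90% dislike it

    print(bland, divisive)  # 0.0 vs 9.1: the divisive trait wins under these numbers

Under these made-up numbers, picking up the divisive trait beats staying bland, and picking up several such traits that appeal to the same minority compounds the gain.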

Causes and policy views tend to fit in this ‘divisive’ category. If you advocate for abolishing the minimum wage, some people will love you more, and some people will hate you more. Causes are often political, which means that who likes you more and who hates you more is correlated across causes. This can make spending a lot of weirdness an even better deal. Once you have advocated for abolishing the minimum wage, if you mostly care about some people liking you a lot, you may as well go on to support a slew of other free market policies, because the same people who liked you the first time will like you more, instead of you losing half of your audience at every step.

Model #4.1: Weirdness is divisive, the goal is spreading weird traits

So far we assumed you wanted to be liked or taken seriously a certain amount by other people. What if we suppose you have a set of weird traits you are in favor of, which you may choose to express or not, and your primary goal is to spread them? (As described in #2). For instance, suppose you care a lot about animal suffering, and also the far future, and think cryonics should be much more common, and think public displays of affection should be normal, and that polyphasic sleep is a thing everyone should try.

As described in #4, variance gets you smaller numbers of people who feel more positively toward you, and sometimes this is worth it. For instance, if nobody will take any of your ideas seriously unless they think you are incredibly impressive. There are a couple of important features specific to the ambition of spreading weird traits however.

One is that to spread a weird trait, you generally have to have it, or associate yourself with it somehow. That potentially makes expressing more of your traits better, aside from its effect on how well respected or liked you are. Suppose you want people to agree with you on cryonics and the far future. Then even if talking about both topics reduces how much people are willing to listen to you, it might be worth it, because now your small remaining group of admirers thinks about twice as many topics you want them to think about. This assumes they don’t just reduce their attention to your first topic proportionally.

Note that the incentives here are different for narrowly directed advocacy organizations and their members. You might do best advocating for whales and bad haircuts, but your whale organization would strongly prefer you just stick to the whales.

Another feature of the divisiveness model is that people disliking you has particularly negative effects when you are trying to spread traits. Often, causing half of humanity to mildly dislike you is not so bad, because it just means you don’t interact with them much on a personal basis, and you weren’t that socially ambitious anyway. However when people dislike you, they will often come to associate your particular traits with dislike. It might still be worth trading some people disliking you for others liking you extra, but this consideration makes such trades worse than they would otherwise be.

Model #5: Weirdness is local

It could be that most of what matters is weirdness relative to those around you, that different groups find different things weird, and that you can change who is around you. This picture seems true for some kinds of traits, such as a weird sense of humor. In this case, you can either explicitly search for your people, or just act as you want to in the long run, scare away those who find it weird, and be left with a suitable group. In this model, being weird in a specific way has a one-time (though perhaps large and drawn-out) cost, after which you can do it for free, forever. So in this model, the wisest way to spend your so-called weirdness budget might be to spend it fast and completely.

Model #6: Weirdness as a signal

If weirdness is just a generic bad sign, or is a sign that you match with some groups of people or others, earlier models will perhaps suffice. But being weird often suggests other specific things about a person.

Once being weird is probably a bad option, it also becomes a sign of lack of awareness or self-control. For instance, if someone wears a ripped shirt to a job interview, one probably infers that they are clueless about customs, don’t own a nice shirt, or have some other mysterious agenda that one probably doesn’t want to be involved with. These kinds of signals lead to the basic situation described in model 2, where things that are not intrinsically bad become so by virtue of being weird. However, this means that you can be more weird in certain ways without using up weirdness budget, if you counteract the signaling on its own. For instance, if you enter a job interview and say ‘I’m sorry that my shirt is torn – I actually got it caught on a shrubbery on my way in here’, then the interviewer will no longer infer that you don’t know about social customs, though they may infer that you were interacting unusually with a shrubbery.

Model #7: Weirdness is honest

The usual consequence of advice to be thrifty with weirdness is that people end up with a collection of views and interests that they keep hidden from the world. Sometimes this might be actively deceptive, for instance when people with unspeakable views claim to have no views. But mostly avoiding being weird is just implicit misrepresentation. This suggests a range of considerations associated with honesty in general. Honesty has virtues and costs.

The costs of honesty as they apply here are, I think, mostly covered above – if you have traits that are widely acknowledged as bad, or that make you seem like someone you don’t want to be seen as, or whatever, it is costly to let them be seen. However, I think there are some benefits of honesty that don’t fit under the models above.

It’s more interesting to know about a relatively complete, ‘authentic’ person than a flat, disconnected one-issue front that an unknown person has chosen to erect. People are usually interested in hearing about people more than ideas, so if you present yourself as a person this will probably interest them more. And a person generally has an array of idiosyncrasies and unusual concerns, including some that are not the most effective thing to be concerned about, and some characteristics that everyone agrees are actively bad.

Relatedly, revealing a relatively full array of your views and interests means people know you better, which tends to improve your relationship with them. I’d guess this is true even for people who observe you from far away on the internet. I think I feel more sympathetic to an author who admits they have characteristics beyond an interest in the subject matter.

Another virtue of honesty is that if people see the larger picture behind the particular view you are espousing, your behavior will make more sense, so you will seem more reasonable and interesting. For instance, if you advocate for developing world aid for a while, and then suddenly change to advocating for space travel, you might seem flakey. Whereas if you say all along that you care about doing the most cost-effective thing, and are open minded about causes, and are considering a bunch of them on an ongoing basis, and explain why you think these different causes are cost-effective, then this might seem consistent instead of actively inconsistent. Relatedly, as your views evolve it seems more natural for those who were interested before to remain interested if they understand the bigger picture of your motives.

Relatedly, particular weird views will often make more sense in the context of your larger set of weird views. If you espouse cryonics on its own, and don’t mention that you also think it will be possible to upload human minds onto computers, the cryonics will seem much more ambitious than it otherwise would.

Then there is just the usual problem that dishonesty is confusing and tangly. Views on some topics strongly suggest views on other topics, so if some topics are out of bounds, you have to make sure you don’t imply anything about them. This is probably much easier in practice than it first seems, because people are not great at drawing inferences. I wouldn’t be surprised if using abstract language were enough to successfully hide most controversial statements most of the time. However there are probably other tangles like this.

If you tell people what you really care about, you can have more useful conversations with them, because they can give feedback and suggestions that actually matter to you. For instance, if I spend most of my time thinking about how to improve my life, but I write as if all I care about is resolving puzzles in social science, then your comments can only help me with puzzles in social science.

It can feel better to be honest. However this might just be down to better relationships and avoiding the mental taxation associated with maintaining an inoffensive front.

This is not an exhaustive account of the virtues of weirdness as honesty. Also note that none of the benefits I mentioned apply strongly all of the time. They are just considerations that sometimes matter, and sometimes make it better to be pretty weird.

***

Ok, those are all of my models of weirdness for now, and of how it is appropriate to splurge/invest in it. I suspect at least many of them have some truth, and apply to varying degrees to various weirdnesses in varying parts of the real world. There are probably other important dynamics I have missed. Overall, I’m still not sure how weird it is good to be in general. It seems plausible that many people should be relatively weird across the board, rather than saving it all up for one issue. I suspect some people are best off being weird while others should be more normal overall, and it is harder to tell what is best on the current margin, where some people are weird and some are normal. My guess is that you should often treat weirdness differently depending on what you want to achieve (basic respectability? Fame? A boyfriend? A good relationship with your audience? A good relationship with your organization?), and the nature of the weirdness in question (How much do some people like it? How much do others not? Does it send specific signals? Is it just bad?).

AI Impacts

I’ve been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project. The basic idea is to apply the evidence and arguments that are kicking around in the world and in various disconnected discussions to the big questions regarding a future with AI. For instance, these questions:

  • What should we believe about timelines for AI development?
  • How rapid is the development of AI likely to be near human-level?
  • How much advance notice should we expect to have of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?
For example, we have recently investigated technology’s general proclivity for abrupt progress, surveyed existing AI surveys, and examined the evidence from chess and other applications regarding how much smarter Einstein is than an intellectually disabled person, among other things.

Some more on our motives and strategy, from our about page:

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI. The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant. The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

In the medium run we’d like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of the questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog. If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form. Cross-posted from LessWrong.

When should an Effective Altruist be vegetarian?

I have lately noticed several people wondering why more Effective Altruists are not vegetarians. I am personally not a vegetarian because I don’t think it is an effective way to be altruistic.

As far as I can tell, the fact that many EAs are not vegetarians is surprising to some because they think ‘animals are probably morally relevant’ basically implies ‘we shouldn’t eat animals’. To my ear, this sounds about as absurd as if GiveWell’s explanation of their recommendation of SCI stopped after ‘the developing world exists, or at least has a high probability of doing so’.

(By the way, I do get to a calculation at the bottom, after some speculation about why the calculation I think is appropriate is unlike what I take others’ implicit calculations to be. Feel free to just scroll down and look at it).

I think this fairly large difference between my own and many vegetarians’ guesses at the value of vegetarianism arises because they think the relevant question is whether the suffering to the animal is worse than the pleasure to themselves of eating the animal. This question sounds superficially plausibly relevant, but I think on closer consideration you will agree that it is the wrong question.

The real question is not whether the cost to you is small, but whether you could do more good for the same small cost.

Similarly, when deciding whether to donate $5 to a random charity, the question is whether you could do more good by donating the money to the most effective charity you know of. Going vegetarian because it relieves the animals more than it hurts you is the equivalent of donating to a random developing world charity because it relieves the suffering of an impoverished child more than foregoing $5 increases your suffering.

Trading with inconvenience and displeasure

My imaginary vegetarian debate partner objects to this on the grounds that vegetarianism is different from donating to ineffective charities: to be a vegetarian you are spending effort and enjoying your life less rather than spending money, and you can’t really reallocate that inconvenience and displeasure to, say, preventing artificial intelligence disaster or feeding the hungry, if you don’t use it on reading food labels and eating tofu. If I were to go ahead and eat the sausage instead – the concern goes – probably I would just go on with the rest of my life exactly the same, and a bunch of farm animals somewhere would be the worse for it, and I scarcely better.

I agree that if the meat eating decision were separated from everything else in this way, then the decision really would be about your welfare vs. the animal’s welfare, and you should probably eat the tofu.

However, whether you can trade being vegetarian for more effective sacrifices is largely a question of whether you choose to do so. And if vegetarianism is not the most effective way to inconvenience yourself, then it is clear that you should make that trade. If you eat meat now in exchange for suffering some more effective annoyance at another time, you and the world can both be better off.

Imagine an EA friend says to you that she gives substantial money to whatever random charity has put a tin in whatever shop she is in, because it’s better than the donuts and new dresses she would buy otherwise. She doesn’t see how not giving the money to the random charity would really cause her to give it to a better charity – empirically she would spend it on luxuries. What do you say to this?

If she were my friend, I might point out that the money isn’t meant to magically move somewhere better – she may have to consciously direct it there. She might need to write down how much she was going to give to the random charity, then look at the note later for instance. Or she might do well to decide once and for all how much to give to charity and how much to spend on herself, and then stick to that. As an aside, I might also feel that she was using the term ‘Effective Altruist’ kind of broadly.

I see vegetarianism for the sake of not managing to trade inconveniences as quite similar. And in both cases you risk spending your life doing suboptimal things every time a suboptimal altruistic opportunity has a chance to steal resources from what would be your personal purse. This seems like something that your personal and altruistic values should cooperate in avoiding.

It is likely too expensive to keep track of an elaborate trading system, but you should at least be able to make reasonable long-term arrangements. For instance, if instead of eating vegetarian you ate a bit frugally and saved and donated a few dollars per meal, you would probably do more good (see the calculations lower in this post). So if frugal eating were similarly annoying, it would be better. Eating frugally is inconvenient in very similar ways to vegetarianism, so it is a particularly plausible trade if you are skeptical that such trades can be made. I claim you could make very different trades though, for instance sometimes foregoing the pleasure of an extra five minutes’ break and working instead. Or you could decide once and for all how much annoyance to take on, and then choose the most worthwhile bits of annoyance, or put a dollar value on your own time and suffering and try to be consistent.

Nebulous life-worsening costs of vegetarianism

There is a separate psychological question which is often mixed up with the above issue. That is, whether making your life marginally less gratifying and more annoying in small ways will make you sufficiently less productive to undermine the good done by your sacrifice. This is not about whether you will do something a bit costly another time for the sake of altruism, but whether just spending your attention and happiness on vegetarianism will harm your other efforts to do good, and cause more harm than good.

I find this plausible in many cases, but I expect it to vary a lot by person. My mother seems to think it’s basically free to eat supplements, whereas to me every additional daily routine seems to encumber my life and require me to spend disproportionately more time thinking about unimportant things. Some people find it hard to concentrate when unhappy, others don’t. Some people struggle to feed themselves adequately at all, while others actively enjoy preparing food.

There are offsetting positives from vegetarianism, which also vary across people. For instance, there is the pleasure of self-sacrifice, the joy of being part of a proud and moralizing minority, and the absence of the horror of eating other beings. There are also perhaps health benefits, which probably don’t vary that much by person, but people do vary in how big they think the health benefits are.

Another way you might accidentally lose more value than you save is in spending little bits of time which are hard to measure or notice. For instance, vegetarianism means spending a bit more time searching for vegetarian alternatives, researching nutrition, buying supplements, writing emails back to people who invite you to dinner explaining your dietary restrictions, etc. The value of different people’s time varies a lot, as does the extent to which an additional vegetarian routine would tend to eat their time.

On a less psychological note, the potential drop in IQ (~5 points?!) from missing out on creatine is a particularly terrible example of vegetarianism making people less productive. Now that we know about creatine and can supplement it, creatine itself is not such an issue. An issue does remain though: is this an unlikely one-off failure, or should we worry about more such deficiencies? (This goes for any kind of unusual diet, not just meat-free ones.)

How much is avoiding meat worth?

Here is my own calculation of how much it costs to do the same amount of good as replacing one meat meal with one vegetarian meal. If you would be willing to pay this much extra to eat meat for one meal, then you should eat meat. If not, then you should abstain. For instance, if eating meat does $10 worth of harm, you should eat meat whenever you would hypothetically pay an extra $10 for the privilege.

This is a tentative calculation. I will probably update it if people offer substantially better numbers.

All quantities are in terms of social harm.

Eating 1 non-vegetarian meal

< eating 1 chickeny meal (I am told chickens are particularly bad animals to eat, due to their poor living conditions and large animal:meal ratio. The relatively small size of their brains might offset this, but I will conservatively give all animals the moral weight of humans in this calculation.)

< eating 200 calories of chicken (a McDonalds crispy chicken sandwich probably contains a bit over 100 calories of chicken (based on its listed protein content); a Chipotle chicken burrito contains around 180 calories of chicken)

= causing ~0.25 chicken lives (1 chicken is equivalent in price to 800 calories of chicken breast i.e. eating an additional 800 calories of chicken breast conservatively results in one additional chicken. Calculations from data here and here.)

< -$0.08 given to the Humane League (ACE estimates the Humane League spares 3.4 animal lives per dollar). However, since the Humane League basically convinces other people to be vegetarians, this may be hypocritical or otherwise dubious.

< causing 12.5 days of chicken life (broiler chickens are slaughtered at 35-49 days of age)

= causing 12.5 days of chicken suffering (I’m being generous)

< -$0.50 subsidizing free range eggs (This is a somewhat random example of the cost of more systematic efforts to improve animal welfare, rather than necessarily the best. The cost here is the cost of buying free range eggs and selling them as non-free range eggs. It costs about 2.6 2004 Euro cents [= US 4c in 2014] to pay for an egg to be free range instead of produced in a battery. This corresponds to a bit over one day of chicken life. I’m assuming here that the life of a battery egg-laying chicken is not substantially better than that of a meat chicken, and that free range chickens have lives that are at least neutral. If they are positive, the figure becomes even more favorable to the free range eggs.)

< losing 12.5 days of high quality human life (assuming saving one year of human life is at least as good as stopping one year of an animal suffering, which you may disagree with.)

= -$1.94-$5.49 spent on GiveWell’s top charities (The lower figure was GiveWell’s estimate for AMF, if we assume saving a life corresponds to saving 52 years – roughly the life expectancy of children in Malawi. GiveWell doesn’t recommend AMF at the moment, but they recommend charities they considered comparable to AMF when AMF had this value.

GiveWell employees’ median estimate for the cost of ‘saving a life’ through donating to SCI is $5936 [see spreadsheet here]. If we suppose a life is 37 DALYs, as they assume in the spreadsheet, then 12.5 days is worth 5936*12.5/(37*365.25) = $5.49. Elie produced two estimates, one generous to cash and one generous to deworming, which were the highest and lowest estimates in the group for the cost-effectiveness of deworming. They imply a range of $1.40-$45.98 to do as much good via SCI as eating vegetarian for a meal.)
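For readers who want to check or adapt this arithmetic, here is the chain above as a short Python sketch, using only the figures already quoted (it reproduces the dollar amounts above to within a cent):

    # Sketch of the calculation chain above, using this post's figures.
    # Outputs are dollar costs of doing equivalent good by other means.

    CALORIES_PER_MEAL = 200      # calories of chicken in one chickeny meal
    CALORIES_PER_BIRD = 800      # calories of chicken breast per extra chicken raised
    DAYS_PER_BIRD = 50           # broilers are slaughtered at 35-49 days of age

    chickens_per_meal = CALORIES_PER_MEAL / CALORIES_PER_BIRD   # 0.25 chicken lives
    days_per_meal = chickens_per_meal * DAYS_PER_BIRD           # 12.5 chicken-days

    humane_league = chickens_per_meal / 3.4      # ACE: 3.4 animal lives spared per dollar
    free_range_eggs = days_per_meal * 0.04       # US 4c per chicken-day made free range
    sci = 5936 * days_per_meal / (37 * 365.25)   # $5936 per 'life' of 37 DALYs

    print(round(humane_league, 2))    # 0.07 (rounded to $0.08 above)
    print(round(free_range_eggs, 2))  # 0.5
    print(round(sci, 2))              # 5.49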

Given this calculation, we get a few cents to a couple of dollars as the cost of doing similar amounts of good to averting a meat meal via other means. We are not finished yet though – there were many factors I didn’t take into account in the calculation, because I wanted to separate relatively straightforward facts for which I have good evidence from guesses. Here are other considerations I can think of, which reduce the relative value of averting meat eating:

  1. Chicken brains are fairly small, suggesting their internal experience is less than that of humans. More generally, in the spectrum of entities between humans and microbes, chickens are at least some of the way to microbes. And you wouldn’t pay much to save a microbe.
  2. Eating a chicken only reduces the number of chicken produced by some fraction. According to Peter Hurford, an extra 0.3 chickens are produced if you demand 1 chicken. I didn’t include this in the above calculation because I am not sure of the time scale of the relevant elasticities (if they are short-run elasticities, they might underestimate the effect of vegetarianism).
  3. Vegetable production may also have negative effects on animals.
  4. GiveWell estimates have been rigorously checked relative to other things, and evaluations tend to get worse as you check them. For instance, you might forget to include any of the things in this list in your evaluation of vegetarianism. Probably there are more things I forgot. That is, if you looked into vegetarianism in the same detail as SCI, it would become more pessimistic, and so it would be cheaper to do as much good with SCI.
  5. It is not at all obvious that meat animal lives are not worth living on average. Relatedly, animals generally want to be alive, which we might want to give some weight to.
  6. Animal welfare in general appears to have negligible predictable effect on the future (very debatably), and there are probably things which can have huge impact on the future. This would make animal altruism worse compared to present-day human interventions, and much worse compared to interventions directed at affecting the far future, such as averting existential risk.

My own quick guesses at factors by which the relative value of avoiding meat should be multiplied, to account for these considerations:

  1. Moral value of small animals: 0.05
  2. Raised price reduces others’ consumption: 0.5
  3. Vegetables harm animals too: 0.9
  4. Rigorous estimates look worse: 0.9
  5. Animal lives might be worth living: 0.2
  6. Animals don’t affect the future: 0.1 relative to human poverty charities

Thus given my estimates, we scale down the above figures by 0.05*0.5*0.9*0.9*0.2*0.1 ≈ 0.0004. This gives us $0.0008-$0.002 to do as much good as averting a meat meal by spending on GiveWell’s top charities. Without the factor for the future (which doesn’t apply to the animal charities), we multiply by only 0.004. This gives us a price of $0.0003 with the Humane League, or $0.002 on improving chicken welfare in other ways. These are not price differences that will change my meal choices very often! I think I would often be willing to pay at least a couple of extra dollars to eat meat, setting aside animal suffering. So if I were to avoid eating meat, then assuming I keep fixed how much of my budget I spend on myself and how much I spend on altruism, I would be trading a couple of dollars of value for less than one thousandth of that.
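Continuing the earlier sketch, the scaling step looks like this (again just reproducing this post’s numbers; substitute your own factors to get your own threshold price):

    # Apply the guessed discount factors to the earlier dollar figures.

    factors = [0.05,  # moral value of small animals
               0.5,   # raised price reduces others' consumption
               0.9,   # vegetables harm animals too
               0.9,   # rigorous estimates look worse
               0.2]   # animal lives might be worth living
    future = 0.1      # animals don't affect the future (vs. human charities only)

    product = 1.0
    for f in factors:
        product *= f                  # ~0.004 without the future factor

    print(1.94 * product * future, 5.49 * product * future)  # ~$0.0008 to ~$0.002 via GiveWell
    print(0.08 * product, 0.50 * product)                    # ~$0.0003, ~$0.002 via animal charities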

I encourage you to estimate your own numbers for the above factors, and to recalculate the overall price according to your beliefs. If you would happily pay this much (in my case, less than $0.002) to eat meat on many occasions, you probably shouldn’t be a vegetarian. You are better off paying that cost elsewhere. If you would rarely be willing to pay the calculated price, you should perhaps consider being a vegetarian, though note that the calculation was conservative in favor of vegetarianism, so you might want to run it again more carefully. Note that in judging what you would be willing to pay to eat meat, you should take into account everything except the direct cost to animals.

There are many common reasons you might not be willing to eat meat, given these calculations, e.g.:

  • You don’t enjoy eating meat
  • You think meat is pretty unhealthy
  • You belong to a social cluster of vegetarians, and don’t like conflict
  • You think convincing enough others to be vegetarians is the most cost-effective way to make the world better, and being a vegetarian is a great way to have heaps of conversations about vegetarianism, which you believe makes people feel better about vegetarians overall, to the extent that they are frequently compelled to become vegetarians.
  • ‘For signaling’ is another common explanation I have heard, which I think is meant to be similar to the above, though I’m not actually sure of the details.
  • You aren’t able to treat costs like these as fungible (as discussed above)
  • You are completely indifferent to what you eat (in that case, you would probably do better eating as cheaply as possible, but maybe everything is the same price)
  • You consider the act-omission distinction morally relevant
  • You are very skeptical of the ability to affect anything, and in particular have substantially greater confidence in the market – to farm some fraction of a pig fewer in expectation if you abstain from pork for long enough – than in nonprofits and complicated schemes. (Though in that case, consider buying free-range eggs and selling them as cage eggs).
  • You think the suffering of animals is of extreme importance compared to the suffering of humans or loss of human lives, and don’t trust the figures I have given for improving the lives of egg-laying chickens, and don’t want to be a hypocrite. Actually, you still probably shouldn’t here – the egg-laying chicken number is just an example of a plausible alternative way to help animals. You should really check quite a few of these before settling.

However I think for wannabe effective altruists with the usual array of characteristics, vegetarianism is likely to be quite ineffective.

Seán Ó hÉigeartaigh on FHI and CSER

This is the last part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series: conversations with Owen Cotton-Barratt, Paul Christiano, Paul Penley, Gordon Irlam, Alexander Berger, and Robert Wiblin.

Nick Beckstead interviewed Seán Ó hÉigeartaigh on the Future of Humanity Institute (FHI) and the Centre for the Study of Existential Risk (CSER). The notes are here.
