Conversation with Paul Christiano on Cause Prioritization Research

I talked to Paul Christiano about his views on cause prioritization as part of a ‘shallow investigation’ (inspired by GiveWell) into cause prioritization which will be released soon. Notes from the conversation are cross-posted here from the 80000hours blog, and also available in other formats on my website. Previously in this series, a conversation with Owen Cotton-Barratt on GPP.



Paul Christiano: Computer science PhD student at UC Berkeley

Katja Grace: Research Assistant, Machine Intelligence Research Institute


This is a verbatim email conversation from the 26th of March 2014. Paul is a proponent of cause prioritization research. Here he explains his support of prioritization research, and makes some suggestions about how to do it.

Note: Paul is Katja’s boyfriend, so take his inclusion here as a relevant expert with a grain of salt.

Katja: How promising do you think cause prioritization is generally? Why?

Paul: Defined very broadly—all research that helps us choose what general areas we should be looking into for the best philanthropic impact—I think it is a very strong contender for the best thing to be doing at the moment. This judgment is based on optimism about how much money could potentially be directed by the kind of case for impact which we could conceivably construct, but also on the belief that there is a good chance that over the very long term the philanthropic community will become radically better-informed and more impactful (think many times) than it currently is. If that’s the case, then it seems likely that a primary output of modern philanthropy is moving towards that point. This is not so much a story about quickly finding insights that let you find a particular opportunity that is twice as effective, and more a story of accumulating a body of expertise and information that has a very large payoff over the longer term. I think that (not coincidentally) one can also give shorter-term justifications for prioritization vs. direct spending, which I also find quite compelling, though perhaps not quite as much so.

Katja: Why do you think not enough is done already?

Paul: You could mean what evidence do I have that not enough is done, or what explanation can I offer for why not enough has been done even if it really is a good thing. I’m going to answer the second.

I think a very small fraction of philanthropists are motivated by a flexible or broad desire to do the most good in the world. So there aren’t too many people who we would expect to do this kind of thing. As a general rule there seems to be relatively little investment in expensive infrastructure which is primarily useful to other people, and relatively little investment in speculative projects that will take a long time and don’t have a great chance of paying off. I do think we are seeing more of this kind of thing in general recently, due to the same kinds of broader cultural shifts that have allowed the EA movement to get traction recently.

Katja: How much better do you think the very best interventions are likely to be than our current best guesses?

Paul: This kind of question is hard to answer due to ambiguity about “very best.” I’m sure that in some sense there are very simple things you could do that are many orders of magnitude more cost-effective than interventions we currently support. So it seems like this really needs to be a question about investigative effort vs. effectiveness. In the very long term, I would certainly not be surprised to discover that the most effective philanthropy in the future was ten or a hundred times more effective than contemporary philanthropy.

Katja: I believe you value methodological progress in this area highly. Is that true? What kind of methodological progress would be valuable?

Paul: There are a lot of ways you could go about figuring stuff out, and I expect most problems to be pretty hard without a long history of solving similar problems. Across fields, it seems like people get better at answering questions as they see what works and what doesn’t work to answer similar questions, they identify and improve the most effective approaches, and so on. This is stuff like, what questions do you ask to evaluate the attractiveness of a cause or intervention? Who do you talk to how much, and what kind of person do you hire to do how much thinking? How do you aggregate differing opinions, and what kind of provisional stance do you adopt to move forward in light of uncertainty? How confident an answer should you expect to get, and how should you prioritize spending time on simple issues vs. important issues? You could write down quite a lot of parameters which you can fiddle with as part of any effort to figure out “how promising is X?” and there are way more parameters that are harder to write down but inevitably come up if you actually sit down and try to do it. So there is a lot to figure out about how to go about the figuring out, and I would imagine that the primary impact of early efforts will be understanding what settings of those parameters are productive and accumulating expertise about how to attack the problem.

Katja: Why is it better to evaluate causes than interventions or charities?

Paul: I could give a number of different answers to this question, that is, I feel like a number of considerations point in this direction.

One is that evaluating charities typically requires a fairly deep understanding of the area in which they are working and the mechanism by which that charity will have an impact. That’s not the sort of thing you can build up in a month while you evaluate a charity; it seems to be the sort of thing that is expensive to acquire and is developed over the course of funding many particular interventions. So one obvious issue is that you have to make choices about where to acquire that expertise and do that further investigation prior to being really equipped to evaluate particular opportunities (though this isn’t to say that looking at particular opportunities shouldn’t be a part of understanding how promising a cause is).

Another is that there are just too many charities to do this analysis for many of them, and the landscape is changing over time (this is also true for interventions though to a lesser extent). If you want to contribute to a useful collective understanding, information about these broader issues is just more broadly and robustly applicable. If you are just aiming to find a good thing to give to now this is not so much an issue, but if you are aiming to become better and better at this over time, judgments about individual charities are not that useful in and of themselves. Of course, while making such judgments you may acquire useful info about the bigger picture or make useful methodological progress.

My views on this question (as on all of these questions) are largely based on a priori reasoning, which makes me very hesitant to speak authoritatively about them. But it’s worth noting that GiveWell has also reached the conclusion that a cause is a good level of analysis, at least at the outset of an investigation, and their conclusion is more closely tied to very practical considerations about what happens when you actually try to conduct these investigations.

Katja: Can you point to past cause prioritization research that was high value? How did it produce value?

Paul: Three examples, of very different character:

  1. GiveWell has done research evaluating charities and interventions that has clearly had an effect at improving individuals’ giving and improving the quality of discourse about related issues, and has made relevant methodological progress. GiveWell Labs is now working on evaluating causes, and I think their current understanding has already somewhat improved the quality of discourse and had some positive expected impact on Good Ventures’ spending. The kind of story I am expecting is more long-term progress, so while I think good work will produce value along the way, I am very open to the possibility that most of the value is coming from gradual progress towards a more ambitious goal rather than improved spending this year.
  2. Many EAs have been influenced by arguments regarding astronomical waste, existential risk, shaping the character of future generations, and impacts of AI in particular. To the extent that we actually have aggregative utilitarian values, I think these are hugely important considerations, and that calling them to attention and achieving increased clarity on them has had a positive impact on decisions (e.g. stuff has been funded that is good and would not have been otherwise) and is an important step towards understanding what is going on. I think most of the positive impact of these lines of argument will wait until they have been clarified further and worked out more robustly.
  3. There is a lot of one-off stuff in economics and the social sciences more broadly that bears on questions about which causes are promising, even if it wasn’t directly motivated by them—moreover, I think that if you were motivated by these questions and doing research in the social sciences or supporting research in the social sciences, you could zoom in on the most relevant questions. I’m thinking of economics that sheds light on the magnitude of the externalities from technological development, the impact of inequality, or determinants of good governance; or history that sheds light on the empirical relationship between war, technological development, economic development, population growth, moral attitudes, etc.; and so on. One could potentially lump in RCTs that shed light on the relationship between narrower interventions and more intermediate outcomes of interest. All of this stuff has a more nebulous impact through causing the modern intellectual elite to have generally more sensible views about relevant questions.

Katja: If new philanthropists wanted to contribute to this area, do you have thoughts on what they should do?

(if they wanted to spend $10,000?)

(if they wanted to spend $1M?)

Paul: If it were possible to fund GiveWell Labs more narrowly, that would be an attractive opportunity, and GiveWell seems like an alright bet anyway. Their main virtue as compared to others in the space is that they are on a more straightforward trajectory where they have an OK model already and can improve it marginally.

It seems like CEA has access to a good number of smart young people who are unusually keen on effectiveness per se; it seems pretty plausible to me that they will eventually be able to turn some of that into valuable research. I think they aren’t there yet (and haven’t really been trying) so this is a lot more speculative (but marginal dollars may be more needed). If it were possible to free up Nick Beckstead’s time with more dollars I would seriously consider that.

Katja: If you had money to spend on cause prioritization broadly, would it be better spent on prioritizing causes, more narrow research which informs prioritization (e.g. about long run effects of technological progress or effectiveness of bed nets), outreach, or something else? (e.g. other forms of synthesis, funding, doing good projects)

Paul: The most straightforwardly good-seeming thing to do at the moment is to bite off small questions relating to the relative promise of particular causes, and then do a solid job aggregating the empirical evidence and expert opinion to produce something that is pretty robustly useful. But there is also a lot of room for trying other things. Overall it seems like the most promising objective is building up the collective stock of knowledge that is robustly useful for making judgments between causes.

Katja: It is sometimes claimed that funders care very little about prioritization research, and so efforts are better spent on outreach than on research, which will be ignored. What do you think of this model?

Paul: I think that the number of people who might care is much larger than the number who currently do, and a primary bottleneck is that the product is not good enough. Between that and the fact that I’m quite confident there are at least a few million cause-agnostic dollars a year that seem sensitive to good arguments, I would be pretty comfortable contributing to cause prioritization. Outreach might be a better bet, but it’s certainly less certain, and my current best guess is that it’s less effective for reaching the most important people than building a more compelling product.

Imaginary queues

I thought of an interesting idea, then searched to see if anyone had done it. It seemed like not, so I wrote the below post. Then I looked once more, and found a few instances (finding one makes it easier to find others). I still think the proposal is good, and I have not much idea whether anyone is doing it competently. However it is not so novel as I hoped, and one must wonder why such things have not been successful enough for me to have heard of them (even after trying unusually hard to hear about them).

Instead of modifying the post, Robin Hanson suggested I put it up more or less as it was before I knew such a thing existed, and then I can compare the details of my suggestion to the details of systems that are real (or real minus any widespread adoption). This can make for a rare test of how different things look if you have an actual business providing a service, versus a daydream about an actual business providing a service. This seems like a good idea to me, so here you have it. The changes I made were adding this part, and removing a half-written couple of sentences at the end about how there didn’t seem to be anyone doing this, though there are some related things. I mean to compare it with a real system another time.


Consider how much time people spend in queues. Here is a proposal for reducing that dramatically, with a smartphone app.

The app controls a virtual queue. You tell your phone you want to join the queue. You continue shopping or whatever. Your phone pings you when you are near the front of the queue. You wander over and get in the physical line just before it is your turn.
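At its core, the mechanism above is just a shared ordered list plus a notification threshold. As a rough sketch (all names and the `ping_threshold` parameter here are my own illustrative choices, not from any real queueing service):

```python
from collections import deque

class VirtualQueue:
    """A shared ordered list of waiting customers.

    Customers within `ping_threshold` positions of the front are the
    ones the app would ping, so they can walk over just in time.
    """

    def __init__(self, ping_threshold=3):
        self.ping_threshold = ping_threshold
        self.entries = deque()  # customer ids, front of the line first

    def join(self, customer_id):
        """Add a customer; return their current position (0 = front)."""
        self.entries.append(customer_id)
        return len(self.entries) - 1

    def length(self):
        """The app can trivially report how long the queue is."""
        return len(self.entries)

    def customers_to_ping(self):
        """Everyone currently close enough to the front to be warned."""
        return list(self.entries)[: self.ping_threshold]

    def call_next(self):
        """The queue owner advances the line to the next customer."""
        return self.entries.popleft() if self.entries else None
```

The owner’s tablet and the customers’ phones would just be different views onto this one shared list, which is why the same device can also add walk-ups who don’t have the app.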

There are many details to iron out here, but there seem to be plausible solutions:

  1. How can people smoothly use this when there will be a large fraction of people who don’t have it?
    In the simplest case, places that have multiple queues could have one dedicated one. Where this is not practical, the owner of the queue might have a smartphone or tablet at the front running a different version of the software which would allow others to add their names to the queue – in which case they can stand around as usual until called – or their phone numbers and a time estimate, in which case they can also wander a bit.
  2. How do you know how long the queue is?
    The app could trivially tell you this. Ultimately, it could also probably give you better stats on how much time it will likely take to get through it.
  3. How does the person serving the queue know how long the queue is?
    The queue owner has their own version of the app which shows them the current queue. This allows them to tell the app when the next person is up, make judgements about staff allocation, and resolve any disputes about whose turn it is. The same device might be used to add people to the queue who don’t have the app.
  4. How does the app know whether you have taken your turn?
    You get a final ping, telling you it is your turn. You respond to it by saying that you are doing so, or not. If you say not, then someone else is immediately called up, so if you do it wrong, it will probably be clear. The queue owner also has control of the queue, so if anything confusing happens, they can ask who you are, and tick off the right person on their app.
  5. What if you don’t get there in time?
    You could be moved to the next position in line a couple of times before being pushed out of it entirely.
  6. How does your phone know how long you need to get to the queue? Or do you just have to be within a few minutes of the queue all the time, for instance?
    In the simplest version, having a blanket few-minute warning would be plenty useful in many cases. Google Now already tells you when you should be leaving to get to a place though, so I assume similar functionality is feasible in the long run.
  7. If you can’t go that far away, would this really be so useful?
    Take the supermarket case. In the worst case, it is probably more amusing to wander around nearby than stand in a queue. In more optimistic but fairly plausible cases, you can do your important shopping then get in queue, then do any more discretionary shopping, and thereby save time. More optimistically, if you come to trust the time estimates, you could start your shopping even after you got in line, and only be not that far away at the very end.
  8. Why would anyone take this up?
    In the long run, this system seems likely to be much better to me. But in the short run, it might be confusing or annoying to customers or staff, or not be very smooth, or doing things differently might just be risky in general. There are some inducements to take it up at the start though. Obviously, it might quickly work well, look innovative, and be much less annoying than a queue. Also, at the start, it should cause more customers to wander around your shop while they wait, probably encouraging them to buy more things from you. If you have multiple queues, you might avoid the downside from people feeling forced to do a new thing, and also be able to more gently encourage them to download the app, as they see an inviting-looking empty queue. I think another good case is probably restaurants. Many restaurants currently try to have a system like this, using pen and paper, with people wandering back pretty often and checking in, or sitting around on the sidewalk. Often the queue takes a long time, so it is valuable to be able to leave. In this situation, I think many people would be happier to just get a ping when it is their turn, especially if in the meantime they can see roughly how long the wait is without walking back. In general, queues are pretty annoying already, so it seems it shouldn’t be that hard to be less annoying.
  9. How do you know who is really first if there is someone already there?
    If you get a ping when it is your turn, this should be clear.
  10. Can you join the queue way early, then wander in whenever you feel like it?
    If, when you miss your place, you are given too large a window to come back in, then you might have many people who could just turn up at any point, making it hard to know the real length of the queue. Thus you may want to avoid this. Though if you have a fairly good sense in practice of how often such people show up, it might not be that bad.
  11. Is it too annoying for people to download an app, and subsequently open it?
    Again, it seems best if this kind of thing starts in contexts where there are multiple queues. Also, even if downloading an app is annoying, remember that your alternative is standing in a queue for ten minutes. Standing boredly in a queue also seems like a pretty likely context for someone to download an app, especially an app that will allow them to not do that again. Alternatively, you could partner with a pre-existing app running related services.
  12. Probably many others that don’t strike me at this moment.
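The no-show policy in items 4, 5 and 10 (demote a no-show a couple of times before pushing them out entirely) can also be sketched directly. `max_misses` and the method names are my own illustrative choices, not from the post:

```python
from collections import deque

class ForgivingQueue:
    """A queue where a no-show is moved back one position a limited
    number of times before being removed from the queue entirely."""

    def __init__(self, max_misses=2):
        self.max_misses = max_misses
        self.entries = deque()  # customer ids, front of the line first
        self.misses = {}        # customer id -> missed turns so far

    def join(self, customer_id):
        self.entries.append(customer_id)
        self.misses[customer_id] = 0

    def call_next(self):
        """The customer whose turn it is; they get the final ping."""
        return self.entries[0] if self.entries else None

    def confirm(self):
        """The front customer showed up and took their turn."""
        cid = self.entries.popleft()
        del self.misses[cid]
        return cid

    def report_miss(self):
        """The front customer didn't show: demote them one position,
        or drop them once they exceed max_misses."""
        cid = self.entries.popleft()
        self.misses[cid] += 1
        if self.misses[cid] > self.max_misses:
            del self.misses[cid]         # pushed out of the queue entirely
        elif not self.entries:
            self.entries.append(cid)     # nobody behind them; they stay put
        else:
            self.entries.insert(1, cid)  # the next person is called instead
```

Keeping the allowed window small (a low `max_misses`) is what preserves the owner’s ability to know the real length of the queue, per item 10.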

In the long run, there would be other useful services you could add. People could get in line whenever, and the app could tell them when to leave home. They could say ahead of time when they want to get to the front of the line, and be added at roughly the right time. The app could support purchase of better places in line from willing traders. You could naturally have good data on traffic, allowing better provision of services.

Overall, it just seems unlikely to me that the best way to keep track of a list of people in these modern times is to physically line them up.

Owen Cotton-Barratt on GPP

I interviewed Owen Cotton-Barratt about the Global Priorities Project, as part of a ‘shallow investigation’ (inspired by GiveWell) into cause prioritization which will be released soon. Notes from the conversation are cross-posted here from the 80000hours blog, and also available in other formats on my website.



  • Owen Cotton-Barratt: Lead Researcher, Global Priorities Project
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute


This is a summary made by Katja of points made by Owen during a conversation on March 24, 2014.

What the Global Priorities Project (GPP) does

The Global Priorities Project is new, and intends to experiment for a while with different types of projects and then work on those that appear highest value in the longer term. Their work will likely address questions about how to prioritize, improve arguments around different options, and will produce recommendations. It will probably be mostly research, but also include for instance some policy lobbying. They will likely do some work with concrete policy-relevant consequences and also some work on general high level arguments that apply to many things. Most features of the project are open to modification after early experimentation. There will be principally two audiences: policy makers and philanthropists, the latter including effective altruists and foundations. GPP has some access to moderately senior government and civil service policy people and are experimenting with the difficulty of pushing for high impact policies.

Research areas

Research topics will be driven by a combination of importance and comparative advantage. GPP is likely to focus on prioritizing broad areas rather than narrower interventions, though these things are closely linked. It is good to keep an eye on object level questions to ensure that you are thinking about things the right way. Owen is interested in developing frameworks for comparing things. This can produce value both in their own evaluations and through introducing metrics that others want to use, and so making proposals more comparable in general.

Work so far

Unprecedented technological risks

GPP has a draft report on unprecedented technological risks. They have shown it to several people involved in policy and received positive feedback. Somebody requested a stack of printed copies for their office, to hand out to people.

How to evaluate projects in ignorance of their difficulty

Owen is working on a paper about estimating returns from projects where we have little idea how difficult they are. Many research tasks seem to fall into this category. For instance, ‘how much money should we be putting into nuclear fusion?’ We have some idea of how good it would be, and not much idea of how hard it is. But we are forced to make decisions about this, so we make an implicit statement about likelihoods. And while it is implicit, we may get it wrong sometimes.

Short term changes

In the short time GPP has existed, it has moved to focus less on policy, because experimenting with other things seemed valuable.

Views on methodology

Long run effects

It has been suggested by others that research on long run consequences of policies is prohibitively difficult. Owen believes that improving our predictions in expectation about these long run consequences is hard but feasible. This is partly because our predictions are currently fairly bad.

There are already some informal arguments in the community surrounding GPP about long run implications which it would be good to write up. For instance, there is an argument that human welfare improvements will tend to be better than animal welfare improvements in the long run, because the former have benefits which compound over time, while animal welfare improvements do not appear to. This is a good case where short term benefits predictably decouple from long term benefits, while in other cases short term benefits may be a reasonable proxy.

GPP will likely focus on long run effects to some extent, but not solely. Owen believes they are very important. However he also thinks routes to impact involve bringing people on board with the general methodology of prioritization, and more speculative research is less persuasive. He thinks people interested in prioritization tend to think long run impacts dominate the value of interventions, but focusing there too strongly will cause others to write us off. He also thinks that we will ultimately need to use some short-term proxies for long term benefits.


Owen is in favor of a relatively high degree of quantification in this kind of research. However this has caveats, and he advocates awareness of possible dangers of quantification. We can be too trusting of the numbers produced in this way. We should be careful about models. Sometimes it is better to throw up our hands and say ‘we don’t know how to model this’ and give some qualitative considerations than to proceed with a bad model. For long term effects, we are at the stage where the best quantified models may be worse than qualitative arguments. However we should be working toward quantification. One benefit of quantification is that it improves conversations and truth seeking. Even before you know how to model a process, if you make explicit models then you can have explicit conversations with people about the models, and about what’s wrong with them.

Risks with quantification

Quantification is a natural tool if we want to make comparisons. If we want a shallow picture of what is going on, it is not clear that quantification will be useful.

Trying to break down intuitions into further details can make things worse. You can miss out factors and be unaware of the omission. You can pay too much attention to factors because you put them in your model, or disregard factors because you didn’t. You can confuse people into thinking you are more confident than you are. You can be duped by thinking you have a formula. Many people are quite bad at quantification, which makes it worse to advocate it in general. Then there are simple time costs: explicit quantification is time consuming.

Nonetheless, for questions we are interested in, Owen thinks it is important to try.

Methodological change and progress

GPP hopes to make methodological progress that will be applicable to many decisions. For instance their current work on evaluation under uncertainty about costs arose from their own work on unprecedented technological risks. After they have developed a general methodology there, they can try to apply it to further problems. Back and forth between concrete prioritization and abstract general questions is likely to characterize the work.

It seems generally useful when looking at more high level questions to pay attention to concrete cases, to check that your thinking is applicable and reasonable there.


The project currently uses much of Owen’s time and a small amount of several others’ time, perhaps summing to around one and a half full time people. It’s hard to estimate, because some of the meetings that involve other people would probably occur even if GPP didn’t exist, under another project. Having a label probably makes somewhat more of these things happen. Niel Bowerman has been putting a nontrivial fraction of his time into it, but he will be cutting back to work on outreach.

The expenses of the organization are largely about one person’s worth of employment, plus some overheads in terms of office rent and sharing administrative staff. Some of the work comes from people being employed by FHI.

Where is the value of cause prioritization in general?

Owen is optimistic about cause prioritization, because it is neglected, and obviously important.

Current best guesses vs. best

Owen thinks there is quite a large range of cost effectiveness between different things, but not absolutely enormous.

Finding new best interventions vs. marginally improving a lot of good spending

There are different routes to impact with cause prioritization. Owen thinks a major route to impact is through laying bricks of prioritisation methodology. This will help people in the future to do better prioritisation (and could be better than anything we manage in the near-term future).

Among direct effects on funding allocation, there are also substantially different kinds of impact you might hope for. You can uncover new very high impact interventions, and do them. Or you can just get a group of people who are currently doing quite good things with their money to do better things with their money. Owen is slightly more optimistic about the latter, but fairly uncertain.

Object vs. meta level research

Prioritization work should be focused in the short term on a mixture of object level research output and methodological progress. GPP’s time will be split fairly evenly between them, perhaps leaning toward the methodological. It can be hard to work on methodology without engaging with more concrete questions.

Why do others neglect cause prioritization?

Owen’s best explanations for the neglect of cause prioritization research in general are that it’s hard, that it’s hard to evaluate, and that academic incentives for research topic choice are not socially optimal. Also, like most research, its costs are concentrated while its benefits are distributed.


The term ‘cause prioritization’ seems suboptimal to Owen, and also to others. Sometimes it is good, but it is used more broadly than how people have traditionally thought of ‘causes’ and confuses people. It is also confusing because people think it is about causation. Owen would sometimes talk about ‘intervention areas’. He doesn’t have a good solution in general, but thinks we should be more actively looking for better terms.

Routes to contributing to prioritization

Thoughts on other organizations

Overall Owen thinks the entire area is under-resourced, so it’s great when other people are working on it. Even unsuccessful work will be valuable as it helps us to learn what works.


Owen thinks GiveWell Labs is laying a lot of useful groundwork for prioritization work. The ‘shallow investigations’ they have been focusing on so far have their value in aggregating knowledge about causes, by researching the funding landscape, who is working on problems, and what is broadly being done. This knowledge base can then be used by anyone who is thinking about cause prioritization, whether in GiveWell Labs or outside. So there are big positive externalities from making the in-progress research public.

GiveWell Labs haven’t yet turned this broad knowledge of existing work into comparisons between areas or prioritization between them. Owen is keen to see what their approach will be.


Leverage are probably doing some prioritization research, which may be very valuable. So far, however, they haven’t published much. Owen would love to see more of their analysis. Communicating is a cost, but not communicating bears the risk that research will be duplicated elsewhere or that things they discover won’t be built upon.


Owen is a big fan of the work the CCC does. They essentially represent expert economic opinion on global prioritisation.

There are a few reasons not to simply use their conclusions. The cost-benefit analysis which underlies most of their recommendations can in some cases miss important indirect effects. They don’t have a methodology which is strong at evaluating speculative work. And their recommendations are from the stance of global policy, which may not be directly applicable to altruists or even national policy-makers. However, their work remains one of the best resources we have today.

Funding GPP

GPP would welcome more funding. It would spend additional money securing the future of the project and hiring more researchers. It’s not clear how hard it is to attract good researchers, as they do not have the funds to hire another person yet, so have not advertised. At the moment money is the limiting factor in scaling up this research.

They would hire people who would similarly work on a variety of small-scale projects which seem important. Depending on their skills, these people might work more directly on research, or on using the research to leverage additional attention and work from the wider community. They would also be interested in hiring someone with more policy expertise. Owen has looked at this a bit, but it is probably not his comparative advantage.

Other conceivable projects

Influencing foundation giving

There are projects to get foundations to share more of their internal research, such as Glasspockets and IssueLab. Since small amounts of prioritization are done inside foundations, one could try to get these sharing efforts to focus more on sharing prioritization research. This sort of project has occurred to Owen in the past, but since projects like Glasspockets are already doing something similar, the area doesn't seem very neglected, so he thinks it is probably not a high-impact opportunity. Also, what the foundations are doing is likely not what we mean by ‘cause prioritization’: they pick an area to focus on, then sometimes try to prioritize based on cost-effectiveness within that area.

In response to the suggestion that it is best to focus on getting funders to care about prioritization, Owen thinks that may be true one day, but we first need higher quality research to be persuasive.

Another approach to influencing foundation giving is to get people who think about prioritization the right way into the foundations.

Influencing academia

It might be valuable to try to get cause prioritization taken up within academia, and seen as an academically respectable thing to do. This would help both with making the conclusions look more respectable, and in getting more brainpower from the class of people who would like to work at universities.

Who else does things like this?

The economics profession

We should think of academic economics as a part of the reference class of cause prioritization. A lot of economics focuses on long term effects of things. Economists would think of themselves as the experts on how to prioritize things, fairly justifiably. They have a lot of knowledge, which Owen tries to be broadly familiar with.

Owen thinks that despite doing a lot of relevant work, economists tend not to actually produce prioritizations of causes. So there may be a large backlog of relevant work to draw on in prioritizing causes. Owen has some idea of the landscape, though an imperfect one. He thinks it would be great to get more economists working in the area.

Doing good in a very noisy world

Suppose you live in a world where every time you try to do something good, it gives rise to such a giant waterfall of side effects that half the time the net effect of your actions is bad, and half the time it is good but largely from sources you didn’t anticipate. Also suppose that the analogous thing would happen if you tried, hypothetically, to do bad things.

It sometimes seems plausible that we do live in such a world, and this sometimes makes it seem that doing good is a hopeless affair.

However, I propose that in the most plausible worlds like this, when you try to do good things, in expectation you do a bit of good, and the good is merely overwhelmed by a vast random term with expected value zero. In which case, even though your actions cause net bad half the time, they have positive expected value, and are about as good in expectation as you thought before considering the side effects. Is that so hopelessness-inducing?
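The expected-value claim here can be illustrated with a small simulation (a hypothetical sketch: the choice of +1 unit of intended good and a noise scale of 100 is an arbitrary assumption, not anything from the original argument):

```python
import random

def act(good=1.0, noise_scale=100.0):
    """One attempt to do good: a small positive intended effect,
    plus a huge zero-mean random term from unanticipated side effects."""
    return good + random.gauss(0.0, noise_scale)

random.seed(0)
outcomes = [act() for _ in range(100_000)]

# Roughly half of individual actions turn out net bad...
frac_bad = sum(o < 0 for o in outcomes) / len(outcomes)

# ...but the average outcome still converges to the intended +1 of good.
mean_outcome = sum(outcomes) / len(outcomes)
```

On this toy model, `frac_bad` comes out near one half while `mean_outcome` stays close to 1, matching the claim that actions can be net bad half the time and still have the positive expected value you anticipated.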

If so, consider a related scenario. Every time you do anything, it has exactly the desired consequences, and no others of importance. Except that it also causes a random number generator to run, and add or subtract a random amount of utility from the world, with expected value zero. Does this seem hopeless, or do you just ignore the random number generator, and do good things?

If our world is very noisy like this, is the aforementioned model a good description?

Inspired by a conversation with Paul Christiano, in which he said something like this.

Intuitions and utilitarianism

Bryan Caplan:

When backed into a corner, most hard-line utilitarians concede that the standard counter-examples seem extremely persuasive.  They know they’re supposed to think that pushing one fat man in front of a trolley to save five skinny kids is morally obligatory.  But the opposite moral intuition in their heads refuses to shut up.

Why can’t even utilitarians fully embrace their own theory? 

He raises this question to argue that ‘there was evolutionary pressure to avoid activities such as pushing people in front of trolleys’ is not an adequate debunking explanation of the moral intuition, since there was also plenty of evolutionary pressure to like not dying, and to like other things that we generally think of as legitimately good.

I agree that one can’t easily explain away the intuition that it is bad to push fat men in front of trolleys with evolution, since evolution is presumably largely responsible for all intuitions, and I endorse intuitions that exist solely because of evolutionary pressures. 

Bryan’s original question doesn’t seem so hard to answer though. I don’t know about other utilitarian-leaning people, but while my intuitions do say something like:

‘It is very bad to push the fat man in front of the train, and I don’t want to do it’

They also say something like:

‘It is extremely important to save those five skinny kids! We must find a way!’

So while ‘the opposite intuition refuses to shut up’, if the so-called counterexample is persuasive, it is not in the sense that my intuitions agree that one should not push the fat man, and my moral stance insists on the opposite. My moral intuitions are on both sides.

Given that I have conflicting intuitions, it seems that any account would conflict with some intuitions. So seeing that utilitarianism conflicts with some intuitions here does not seem like much of a mark against utilitarianism. 

The closest an account might get to not conflicting with any intuitions would be if it said ‘pushing the fat man is terrible, and not saving the kids is terrible too. I will weigh up how terrible each is and choose the least bad option’. Which is what utilitarianism does. An account could probably concord more with these intuitions than utilitarianism does, if it weighed up the strength of the two intuitions instead of weighing up the number of people involved. 

I’m not presently opposed to an account like that I think, but first it would need to take into account some other intuitions I have, some of which are much stronger than the above intuitions: 

  • Five is five times larger than one
  • People’s lives are in expectation worth roughly the same amount as one another, all else equal
  • Youth and girth are not very relevant to the value of life (maybe worth a factor of two, for difference in life expectancy)
  • I will be held responsible if I kill anyone, and this will be extremely bad for me
  • People often underestimate how good for the world it would be if they did a thing that would be very bad for them.
  • I am probably like other people in a given way, in expectation
  • I should try to make the future better
  • Doing a thing and failing to stop the thing have very similar effects on the future.
  • etc.

So in the end, this would end up much like utilitarianism.

Do others just have different moral intuitions? Is there anything wrong with this account of utilitarians not ‘fully embracing’ their own theory, and nonetheless having a good, and highly intuitive, theory?