The landscape of altruistic interventions

Suppose you want to figure out what the best things to do are. One approach is to start by prioritizing high level causes: is it better broadly to work on developing world health, or on technological development? Then you can work your way downwards: is it better to work on treating infectious diseases or on preventative measures? Malaria or HIV? Direct bed-net distribution or political interventions? Which politician? Which tactic? Which day?

This should work well if the landscape of interventions is kind of smooth – if the best interventions are found with the pretty excellent interventions, which are in larger categories with the great interventions, etc. This approach might work well for finding a person who really likes hockey, for instance. The extreme hockey lovers will be found with the fairly enthusiastic hockey lovers, who will probably ultimately be in countries of hockey lovers. It should not, on the other hand, work very well for finding the reddest objects in your house – the reddest thing is not likely to be in the room with the most overall red. Which of these is more similar to finding good altruistic interventions?

This method would work well for finding the reddest things in your house if the redness of things was influenced a lot by the color of the lights, and you had very differently colored lights throughout your house. Similarly, if most of the variation in value between different altruistic interventions comes from general characteristics of high level causes, we should expect this method to work better there. You might also expect it to work well if the important levels could be mixed and matched – if the best high level cause could be combined with the best generic method of pursuing a cause, and done with the best people. These things seem plausible to me in the case of altruistic interventions, but I’m not really sure. What do you think?

3 responses to “The landscape of altruistic interventions”

  1. Didn’t Eliezer Yudkowsky recently post on Less Wrong postulating that broad economic growth has net negative expected utility, because it hastens the advent of non-friendly AGI without correspondingly hastening friendly AGI, thus adding to what he considers the asymptotically dominant existential risk to humanity’s values?

    Even if the argument seems somewhat implausible (e.g. if you discount the threat of hostile AGI on anthropic grounds), surely it’s enough to establish that the landscape is probably non-smooth.

    OTOH I have a friend who is trying, I think, to develop a concept of utilitarian “locality” – given our incredibly bounded information and cognitive resources, and the incredible complexity of the universe and potential for black swans of all kinds to throw spanners into our works, in practice we can only satisfice, against relatively simplistic criteria. So he would probably argue the relevant landscape is smooth, or rather can be treated as smooth enough.

  2. Rather than work downwards, why not work upwards?

    Decide on what problems are the most troubling to -you- (at a low level — malaria, unemployment, etc), in your opinion, and then trace the causes of those problems. Then trace the causes of THOSE problems, and so on up the ladder.

    Eventually you will reach a root cause… and can work on fixing that.

    • Go too far up the ladder, though, and you often end up with something like “the root cause is the laws of thermodynamics”. Nothing at all can be done about that.

