Interview with Cool Earth

An interview with the people of Cool Earth, a charity I investigated and ultimately recommended as a relatively good one while visiting GWWC last summer.

New website contents

Rob and Federico of GWWC have made a nice summary of metacharities, which might be useful to those of you interested in cause prioritization as a cause. For future reference, I’ve put it in my online collection of useful things (just for useful things which need help with their web presence).

I’ve also put up a few ‘structured cases’ for various claims there. These are intended to be well organized collections of arguments and considerations regarding a particular question, such as ‘should I invest in opening US borders?’. The claims of interest are mostly about how resources can be used to make the world better. The arguments are works in progress, probably incomplete and not in particularly good formats. However, I talk to people about these questions often, so it is good to have the arguments online to point to.

I’ve also put up some puzzles and other things.

Discussions for information

Information and ideas percolate through society in many forms: via research papers, media publications, schools, advocacy campaigns, and perhaps most ubiquitously, private conversations.

Private conversations fulfill purposes other than information processing and transfer, so they cannot be expected to fulfill such roles perfectly, or even particularly well. They convey implicit information about the speakers and their qualities, they manifest social maneuvering, they embody humor and other good feelings, and they provide a good setting for enjoying company more generally.

The information-related roles that conversation can play most obviously include straightforwardly communicating information, and – in a more argumentative fashion – collaboratively figuring out what is true. Even if conversations are usually for other things as well, given that they are a major part of the social information processing and dispersal system, one might wonder if they could do these jobs better when those roles are important. For instance, if you partake in ‘work’ conversations with the intention of productively progressing toward specific goals, should you be doing anything other than following your natural conversation instincts?

If you wanted conversations to fulfill these information roles, here are some ways they seem to fall short in general:

  1. They are rarely recorded or shared usefully. This is bad because it means they must be repeated anew by many different groups of people, not to mention by exactly the same people.
  2. Relatedly, conversations seem to hardly build on one another over time. If I think of a good counterargument to your point, this won’t be available to any of the gazillion others having the same argument in the near future, because neither of us will do anything that makes it so, and both of us will probably forget all but the gist of the discussion by tomorrow. I posit that it is very rare for a counterargument to a counterargument to a counterargument to an argument to become widely known, even when the argument and the first counterargument are often repeated. Except when participants have a history or perhaps a shared subculture, each discussion basically starts the topic anew. A given discussion rarely gets through many considerations, and doesn’t leave the considerations it gets through in a state to be built upon by other conversations.
  3. Discussions are often structured poorly for analysis, though perhaps ok for information transfer. It is natural for a discussion to take a fairly linear form, because only one sentence can be said at a time. But the topics being discussed often don’t fit well in this form. A given statement has many potential counterarguments, and lots of possible pieces of supporting evidence, or vantage points from which to analyze it. The same is true of each of those arguments or supports. So a person may make a statement, then another person may offer a criticism, and so on for a few levels, at which point one of the people will ‘lose the argument’ because he has no retort. But that corner of the tree was not necessarily critically important to the truth of the initial claim. If at the first level a different argument had been pursued, or different evidence had been offered, a largely unrelated path would have been taken, and someone else may have had the final say. If the parties do not remember the rest of the structure of their discussion, it is hard to go back to a sensible juncture and hash out a different supporting claim. Usually they will just move on to something else that the last bit reminded them of, and in the future vaguely remember who won the argument, without further value being created.
  4. Relatedly, it is often unclear to the participants what the overall structure of a discussion is, or how the parts relate to one another, even on a small scale. For instance, which parts are important to carrying the point, and which parts are watertight. I find that when I write out an argument at length, in a structured way, I notice gaps that weren’t salient informally. And structured arguments look as if they help students reason more clearly.
  5. Disagreement interacts badly with the social signaling purposes of conversation, as it tends to be taken as an aggressive move except when done skillfully. It’s not clear to me whether this is a fundamental problem with collaboratively critiquing ideas or an accident of the social norms we have, both for critique and for attacking people.
  6. Similarly, allocating time in conversations tends to interact badly with social signaling. The person with the best point to listen to next is unlikely to always be the one who should talk next according to fairness, kindness, status, or volume. There have been some attempts to improve this.

This list is not nearly exhaustive. Feel free to suggest more.

I am told that people have often tried to improve conversational norms, but I only know of a few such efforts. These are innovations such as randomized alarms while talking, hand gestures during seminars, argument mapping, and anonymous text conversation while everyone is in the room. I’d like to see a better survey of such attempts, but so far have not found one. Hopefully I just haven’t guessed the right keywords. Pointers would be appreciated.

For figuring out what is true, it seems many of these problems could be resolved by writing a more permanent, public, well structured outline of a topic of debate, then adding to it as you find new arguments. I think various people are in favor of such a thing, but I haven’t seen it done much (again, pointers appreciated). I have tried this a bit, with Paul Christiano, with the hope that it will either help somewhat, or allow us to figure out the problems with it.
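For concreteness, here is a minimal sketch of the kind of tree structure such an outline might take. The class, field names, and example claims are my own hypothetical illustration, not the format the actual cases use.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    # One node of an argument tree: a statement plus the considerations bearing on it.
    statement: str
    supports: list["Claim"] = field(default_factory=list)
    objections: list["Claim"] = field(default_factory=list)

def outline(claim: Claim, depth: int = 0) -> list[str]:
    # Render the tree as an indented outline, so a later discussion can resume
    # at any branch, not only the one where the last exchange happened to end.
    lines = ["  " * depth + claim.statement]
    children = [("+ ", c) for c in claim.supports] + [("- ", c) for c in claim.objections]
    for mark, child in children:
        child_lines = outline(child, depth + 1)
        child_lines[0] = "  " * (depth + 1) + mark + child.statement
        lines.extend(child_lines)
    return lines

# A tiny, made-up fragment of the 'open US borders' case, for illustration only.
root = Claim(
    "Advocating for open US borders is a good use of resources",
    supports=[Claim("Migrants' earnings rise substantially when they move")],
    objections=[Claim("Institutions in the destination country might be harmed",
                      objections=[Claim("The evidence for large institutional harm is weak")])],
)
print("\n".join(outline(root)))
```

The actual cases below are of course prose outlines rather than code, but the shape – claims with supports and objections hanging off them, any branch of which can be revisited later – is the same idea.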

Here are a few examples in progress:

  1. The case for Cool Earth (linked before from my climate research)
  2. US open borders advocacy
  3. Animal activism

So far we don’t seem to have found any irrecoverable problems. However I’m curious to hear your thoughts on the merits of such interventions. Especially if you think this sort of thing is a terrible aid to discussion for the purpose of figuring things out.

In praise of pretending to really try

Ben Kuhn makes some reasonable criticisms of the Effective Altruism movement. His central claim is that in the dichotomy of ‘really trying’ vs. ‘pretending to try’, EAs ‘pretend to really try’.

To be explicit, I understand these terms as follows:

‘Really trying’: directing all of your effort toward actions that you believe have the highest expected value in terms of the relevant goals

‘Pretending to try’: choosing actions with the intention of giving observers the impression that you are trying.

‘Pretending to really try’: choosing actions with the intention of giving observers the impression that you are trying, where the observers’ standards for identifying ‘trying’ are geared toward a ‘really trying’ model. e.g. they ask whether you are really putting in effort, and whether you are doing what should have highest expected value from your point of view.

Note the normative connotations. ‘Really trying’ is good, ‘pretending to try’ is not, and ‘pretending to really try’ is hypocritical, so better than being straight out bad, but sullied by the inconsistency.

I claim Effective Altruism should not shy away from pretending to try. It should strive to pretend to really try more convincingly, rather than striving to really try.

Why is this? Because Effective Altruism is a community, and the thing communities do well is modulating individual behavior through interactions with others in the community. Most actions a person takes as a result of being part of a community are pretty much going to be ‘pretending to try’ by construction. And such actions are worth having. If they are discouraged, the alternative will not be really trying. And pretending to try well is almost as good as really trying anyway.

Actions taken as a result of being in a community will be selected for visibility, because visible actions are the ones you can pick up from others in the community. This doesn’t necessarily mean you are only pretending to try – it will just tend to look like pretending to try. Such actions will also probably be visible because, by assumption, you are driven to act in part by your community membership, and most ways such motivation can arise involve the possibility of others in the community being aware of your actions.

Many actions in communities are chosen because others are doing them, or because others will approve of them, or because that’s what it seemed a good community member would do, not because they were calculated to be best. And these dynamics help communities work and achieve their goals.

Of course people who were really trying would do better than those who were only pretending to try. As long as they could determine what a person who was pretending would do, a person who really tries could just copy them where it is useful to have pretending-to-try type behaviors. But there is no option to make everyone into such zealous tryers that they can do as well as the effective altruism movement without the social motivations. The available options are just those of what to encourage and what to criticize. And people who are pretending to really try are hugely valuable, and should not be shamed and discouraged away.

A community of people not motivated by others seeing and appreciating their behavior, not concerned for whether they look like a real community member, and not modeling their behavior on the visible aspects of others’ behavior in the community would generally not be much of a community, and I think would do less well at pursuing their shared goals.

I don’t mean to say that ‘really trying’ is bad, or not a good goal for an individual person. But it is a hard goal for a community to usefully and truthfully have for many of its members, when so much of its power relies on people watching their neighbors and working to fit in. ‘Really trying’ is also a very hard thing to encourage others to do. If people heed your call to ‘really try’ and do the ‘really trying’ things you suggest, this will have been motivated by your criticisms, so it seems more like a higher quality of pretending to really try than really trying itself. Unless your social pressure somehow pressured them to stop being motivated by social pressure.

I also fear that pretending to try, to various extents, is underestimated because we like to judge other people for their motives. A dedicated pretender, who feels socially compelled to integrate cutting-edge behavioral recommendations into their pretense, can be consequentially very valuable, regardless of their virtue on ethical accounts.

So I have argued that pretending to try is both inevitable and useful for communities. I think also that pretending to really try can do almost as well as really trying, as long as someone puts enough effort into identifying chinks in the mask. In the past, people could pretend to try just by giving some money to charity; but after it has been pointed out enough times that this won’t achieve their purported goals, they have to improve their act if they want anyone (themselves included) to believe they are trying. I think a lot of progress can come from improving the requirements one must meet to look like one is trying. This means pointing out concrete things that ‘really trying’ people would be doing, and it hopefully leads to pretending to really try, and then pretending to really really try. Honing the pretense, and making sure everyone knows what the current standards are, are important tasks for a community with goals, and ones that might be neglected if we undervalue pretending to really try.

Ethical Intuitionism, part 1

I read Michael Huemer’s book Ethical Intuitionism at Bryan Caplan’s suggestion. Before reading it I thought that Ethical Intuitionism seemed an unlikely position and that Bryan and Michael seemed like smart guys, so I hoped I might be persuaded to significantly change my mind. I still believe both of the premises, but did not get as far as changing my mind, so here I’ll report back on some of my reasons. I actually read it a while ago, and am reconstructing some of this post from memories and scribbles in margins, so apologies if this causes any inaccurate criticism.

Ethical intuitionism is the position that there exist real, irreducible, moral properties, and that these can be known about through intuition. i.e. some things are ‘good’ or ‘right’ independent of anyone’s feelings or preferences, and you can learn about this through finding such ideas in your head. I found Huemer’s case for it well written, thought provoking, and for the most part valid. However I thought it had some problems, and was overall not compelling.

Huemer divides metaethics into five types of theories, then argues against non-cognitivism, relativism, nihilism, and naturalism. The last theory standing is intuitionism, which Huemer goes on to defend against the criticisms it has previously attracted. Note that these metaethical theories are both theories of what ethics is and theories of how we can know about it.

I

The first problem is related to intuitionism being primarily a theory of how we can know about ethics (through intuitions), while being hazier on what exactly ethics is, in terms of anything else. It’s something real that we can learn about through our intuitions. It is normative. If the theory contained any more concrete specification of the nature of ethics or how it got there, I think it would become more obviously subject to the same criticisms made of other metaethical theories. For instance, this is one of Huemer’s criticisms of subjectivism:

Fourth, consider the question, why do I approve of the things I approve of? If there is some reason why I approve of things, then it would seem that that reason, and not the mere fact of my approval, explains why they are good. If I approve of x because of some feature that makes it desirable, admirable, etc.,  in some respect, then x’s desirability (etc.) would be an evaluative fact existing prior to my approval. On the other hand, if I approve of x for no reason, or for some reason that does not show x to be desirable (etc.) in any respect, then my approval is merely arbitrary. And why would someone’s arbitrarily approving of something render that something good?

This argument does not seem specific to subjectivism. Take any purported source of morality S. If there is some reason for S justifying what it does, one could similarly say that that reason, and not the mere fact of being allowed by S, explains why those things are good. And if S has no reason, then it is arbitrary. And why would something’s arbitrarily being allowed by S render it good?

If we were clearer on the origins of goodness in the intuitionist account, wouldn’t it fall prey to one side or the other of the same argument? This argument appears to rule out all sources of goodness.

Perhaps the difference with the intuitionist account here is that it is ok for it to be arbitrary, because while goodness is arbitrary, it isn’t caused by some other arbitrary thing. You can answer ‘why would arbitrarily being good render something good?’, by pointing out that this is logically necessary, whereas on the other accounts you are seeking to equate two things that are not already identical.

On the intuitionist account you are also, I think, trying to equate two non-identical things – I have merely labeled them ambiguously above. On the one hand, you have ‘goodness’ which is a property you receive statements about via your intuition, and on the other you have ‘goodness’ which implies you should do certain things. If these were not distinguishable, there would be no debate. But it could still be that the goodness which means you should do certain things is primary, and causes the goodness which you observe. Then the former goodness would be arbitrary, but nothing else arbitrary would ‘make’ good things good.

So perhaps the argument for intuitionism is that arbitrariness alone is ok, but it is not ok for a thing to be caused by other arbitrary things. Or perhaps only moral things shouldn’t be caused by other arbitrary things? Or only goodness? I’m not sure how these things work, but this seems very arbitrary, and I see no principled way to decide which things should be arbitrary and which things principled.

II

Huemer argues that we should not disregard ‘intuitions’ as vague, untrustworthy, or unscientific. He points out that most of our thoughts and beliefs are built out of intuitions at some level. Our scientific beliefs rest on reasoning that is ultimately justified by a bunch of stuff turning up in our heads – visual perceptions, logical rules, senses of explanatory satisfaction – and us feeling a strong inclination to trust it.

I think this is a good and correct point. However I don’t think it supports his position. On this conceptualization of things, non-intuitionists also reach their ethical judgements based on intuitions. They just take into account a much wider array of intuitions, and build more complicated inferential structures out of them, instead of relying solely on intuitions that directly speak of normativity.  For instance, they have intuitions about causality, and logical inference, and their perceptions, and people. And these intuitions often cause them to accept the claim that people evolved. And they have further intuitions about how disembodied moral truths might behave which make it hard for them to imagine how these would affect the evolution of humans. And they infer that such moral truths do not explain their observed mental characteristics, based on more intuitions about inference and likely states of affairs. And people do this because they find such methods more intuitive than the isolated moral intuitions.

So it seems to me that Huemer needs an argument not for intuitions, but for using very direct, isolated intuitions instead of indirect chains of them. And arguments for this more direct intuitionism seem hard to come by. At the outset, it seems to undermine itself, as it is clearly not what most people find intuitive, and so appears to need support from more complicated constructions of intuitions, if it is to become popular. However this is not a fatal undermining. Also, indirect inferences from many intuitions woven together have been very fruitful in non-moral arenas, though whether they do better than direct intuitions is debatable.

In sum, Huemer makes a good case that everything is made of intuitions, but this doesn’t seem to normatively support using short chains of intuitions over the longer, more complex chains that persuade us for instance that if there was an ethical reality out there, there is no reason it would interfere with our evolution such as to instill knowledge of it in our minds.

III

One might argue that Huemer often begged the question, for instance claiming that other ethical theories are wrong because their ethical consequences are contrary to our ethical intuitions. This is perhaps an unsympathetic interpretation – usually he just says something like ‘but X is obviously not good’ – however, I’m not sure how better to interpret this. Since all thoughts can be categorized as intuitions, it would be hard not to use some sort of intuitions to support an intuitionist conclusion. However, given the restatement of his position as a narrower ‘direct intuitionism’, it would be nice to at least not use just those kinds of direct intuitions as the measure.

***

The issues of how evolved creatures would come to know of moral truths, or how moral truths would have any physical effect on the world at all, seem like big problems with intuitionism to me. Huemer addresses them briefly, but doesn’t go into enough depth to satisfactorily resolve these problems, I think. I mean to write about this issue in another post.

So in sum, ethical intuitionism seems subject to criticisms similar to those facing other metaethical theories, but avoids them by being nonspecific or begging the question. ‘What do my intuitions say?’ seems like an unfair criterion for correct ethical choices in a contest between intuitionism and other theories. The arguments for intuitions being broadly important and trustworthy do not support this thesis of intuitionism, which is about relying on isolated immediate intuitions over more indirect constructions of intuitions. A vague case can be made for moral intuitions evolving, but I doubt it works in detail, though I have also not yet detailed any argument for this.

Research on climate organizations

[Image: Burning rainforest in Brazil. Destruction of tropical rainforest, as prevented by Cool Earth (Photo credit: Wikipedia)]

Here is the second post summarizing some of the climate change research I did at CEA last summer (the first is here). Below are links to outlines of the reasoning behind a few of the estimates there (they are the ‘structured cases’ mentioned in the GWWC post).

The organizations were investigated to very different levels of detail. This is why Sandbag, for instance, comes out looking quite cost-effective, but is not the recommendation. I basically laid out the argument they gave, but had almost no time to investigate it. Adding details to such estimates seems to reliably worsen their apparent cost-effectiveness a lot, so it is not very surprising if something looks very cost-effective at first glance.

The Cool Earth case is the most detailed, though most of the details are rough guesses at much more detailed things. The cases are designed to allow easy amendment if more details or angles on the same details are forthcoming.

As a side note, I don’t think GWWC plans on more climate change research soon, but if anyone else is interested in such research, I’d be happy to hand over some useful (and neatly organized!) bits and pieces.

Cool Earth

Solar Aid

Sandbag

Probabilistic self-defeat arguments

Alvin Plantinga’s ‘evolutionary argument against naturalism’ (EAAN) goes like this:

  1. If humans were created by natural selection, and also not under the guidance of a creator (‘naturalism’), then (for various reasons he gives) the probability that their beliefs are accurate is low.
  2. Therefore believing in natural selection and naturalism should lead a reasonable person to abandon these beliefs (among others). i.e. belief in naturalism and natural selection is self-defeating.
  3. Therefore a reasonable person should not believe in naturalism and natural selection.
  4. Naturalism is generally taken to imply natural selection, so a reasonable person should not believe in naturalism.

This has been attacked from many directions. I agree with others that what I have called point 1 is dubious. However even on accepting it, it seems to me the argument fails.

Let us break down the space of possibilities under consideration into:

  • A: N&T: naturalism and true beliefs
  • B: N&F: naturalism and false beliefs
  • C: G&T: God and true beliefs

The EAAN says conditional on N, B is more likely than A, and infers from this that one cannot believe in either A or B, since both are included in N.

But there is no obvious reason to lump A and B together. Why not lump B and C together? Suppose we believe ‘natural selection has not produced true beliefs in us’. Then either natural selection has produced false beliefs, or God has produced true beliefs. If we don’t assign very high credence to the latter relative to the former, then we have a version of the EAAN that contradicts its earlier incarnation: ‘natural selection has not produced true beliefs in us’ is self-defeating. So we must believe that natural selection has produced true beliefs in us*.

What if we do assign a very high credence to C over B? It seems we can just break C up into smaller parts, and defeat them one at a time. B is more likely than C&D, where D = “I roll a 1 on my n-sided die”, for some value of n. So consider the belief “B or C&D”. This is self-defeating. As, it would seem, is “B or C&E”, where E is “I roll a 2 on my n-sided die”. And so on.
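To make the structure of this move concrete, here is a minimal sketch in probability notation. The specific numbers are my own illustration, not anything Plantinga or the argument above commits to.

```latex
% Illustrative numbers only: suppose P(B) = 0.2 and P(C) = 0.8, with B and C
% mutually exclusive, and reliable (true) beliefs obtaining only under C.
% Let D be an independent event with P(D) = 1/n, e.g. rolling a 1 on an n-sided die.
% Conditioning on the disjunctive belief ``B or (C and D)'':
\[
  P\bigl(\text{reliable} \mid B \lor (C \land D)\bigr)
  \;=\; \frac{P(C \land D)}{P(B) + P(C \land D)}
  \;=\; \frac{0.8/n}{0.2 + 0.8/n}
  \;\longrightarrow\; 0
  \quad \text{as } n \to \infty.
\]
```

So by the EAAN’s own standard – a low probability of one’s beliefs being reliable, conditional on what one believes – the disjunction counts as self-defeating for large enough n, even though C on its own is highly probable.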

By this reasoning, if there is any possible world that is self-defeating, pretty much any other possible world can be sucked into the defeat. This depends a bit on the details about how unlikely reliable beliefs must be for belief in that situation to be self-defeating, and how the space can be broken up. But generally, this reasoning allows the self-defeatingness of any state of affairs to contaminate any other state of affairs that can be placed in disjunction with it, and that can be broken into states of affairs not much more probable than it.

It seems to me that any reasoning with this property must be faulty. So I suggest probabilistic self-defeat arguments of this form can’t work in general.

It could be that Plantinga means to make a stronger argument, for instance ‘there is no set of beliefs consistent with naturalism under which one’s beliefs have high probability’, but this seems like quite a hard argument to make. I could place a high probability on A for instance.

It could also be that Plantinga means to use further assumptions that make a distinction between grouping A and B together and grouping B and C together. One possibility is that it is important that N is a cause of T or F, but this seems both ad-hoc and possible to get around. At any rate, Plantinga doesn’t seem to articulate further assumptions in the account of his argument that I read, so his argument seems unlikely to be correct as it stands, all other criticisms aside.

*Note that if you wanted to turn the argument against creationism, it seems you could also just expand the space to include creators who don’t produce true beliefs, and, depending on probabilities, use this to defeat the belief in a creator, including one who does produce true beliefs.

The future of values 2: explicit vs. implicit

Relatively minor technological change can move the balance of power between values that already fight within each human. Beeminder empowers a person’s explicit, considered values over their visceral urges, in much the same way that the development of better slingshots empowers one tribe over another.

In the arms race of slingshots, the other tribe may soon develop their own weaponry. In the spontaneous urges vs. explicit values conflict though, I think technology should generally tend to push in one direction. I’m not completely sure which direction that is however.

At first glance, it seems to me that explicit values will tend to have a much better weapons research program. This is because they have the ear of explicit reasoning, which is fairly central to conscious research efforts. It seems hard to intentionally optimize something without admitting at some point in the process that you want it.

When I want to better achieve my explicit goal of eating healthy and cheap food, for instance, I can sit down and come up with novel ways to achieve this. Sometimes such schemes even involve tricking the parts of myself that don’t agree with this goal, so divorced are they from the process. When I want to fulfill my urge to eat cookie dough on the other hand, I less commonly deal with this by strategizing to make cookie dough easier to eat in the future, or by tricking other parts of myself into thinking eating cookie dough is a prudent plan.

However this is probably at least partly due to the cookie dough eating values being shortsighted. I’m having trouble thinking of longer term values I have that aren’t explicit on which to test this theory, or at least having trouble admitting to them. This is not very surprising; if they are not explicit, presumably I’m either unaware of them or don’t endorse them.

This model in which explicit values win out could be doubted for other reasons. Perhaps it’s pretty easy to determine unconsciously that you want to live in another suburb because someone you like lives there, and then, after you have justified it by saying it will be good for your commute, all the logistics that you need to be conscious for can still be carried out. In this case it’s easy to almost-optimize something consciously without admitting that you want it. Maybe most cases are like this.

Also note that this model seems to be in conflict with the model of human reasoning as basically involving implicit urges followed up by rationalization. And sometimes at least, my explicit reasoning does seem to find innovative ways to fulfill my spontaneous urges. For instance, it suggests that if I do some more work, then I should be able to eat some cookie dough. One might frame this as conscious reasoning merely manipulating laziness and gluttony to get a better deal for my explicit values. But then rationalization would say that. I think this is ambiguous in practice.

Robin Hanson responds to my question by saying there are not even two sets of values here in conflict, but rather one which sometimes pretends to be another. I think it’s not obvious how that is different, if pretending involves a lot of carrying out what an agent with those values would do.

An important consideration is that a lot of innovation is done by people other than those using it. Even if explicit reasoning helps a lot with innovation, other people’s explicit reasoning may side with your inchoate hankerings. So a big question is whether it’s easier to sell weaponry to implicit or explicit values. On this I’m not sure. Self-improvement products seem relatively popular, and to be sold directly to people more often than products explicitly designed to e.g. weaken willpower. However, products that weaken willpower without an explicit mandate are perhaps more common. Also, much of the R&D that goes into reducing people’s self-control is sponsored by other organizations, e.g. sellers of sugar in various guises, and is never actually sold directly to the customer (they just get the sugar).

I’d weakly guess that explicit values will win the war. I expect future people to have better self-control, and do more what they say they want to do. However this is partly because of other distinctions that implicit and explicit values tend to go along with; e.g. farsighted vs. not. It doesn’t seem that implausible that implicit urges really wear the pants in directing innovation.

The future of values

Humans of today control everything. They can decide who gets born and what gets built. So you might think that they would basically get to decide the future. Nevertheless, there are some reasons to doubt this. In one way or another, resources threaten to escape our hands and land in the laps of others, fueling projects we don’t condone, in aid of values we don’t care for.

A big source of such concern is robots. The problem of getting unsupervised strangers to carry out one’s will, rather than carrying out something almost but not quite like one’s will, has eternally plagued everyone with a cent to tempt such a stranger with. There are reasons to suppose the advent of increasingly autonomous robots with potentially arbitrary goals and psychological tendencies will not improve this problem.

If we avoid being immediately trodden on by a suddenly super-superhuman AI with accidentally alien values, you might still expect that a vast new labor class of diligent geniuses with exotic priorities would snatch a bit of influence here and there, and eventually do something you didn’t want with the future we employed them to help out with.

The best scenario for human values surviving far into an era of artificial intelligence may be the brain emulation scenario. Here the robot minds start out as close replicas of human minds, naturally with the same values. But this seems bound to be short-lived. It would likely be a competitive world, with strong selection pressures. There would be the motivation and technology to muck around with the minds of existing emulations to produce more useful minds. Many changes that would make a person more useful for another person might involve altering that person’s values.

Regardless of robots, it seems humans will have more scope to change humans’ values in the future. Genetic technologies, drugs, and even simple behavioral hacks could alter values. In general, we understand ourselves better over time, and better understanding yields better control. At first it may seem that more control over the values of humans should cause values to stay more fixed. Designer babies could fall much closer to the tree than children traditionally have, so we might hope to pass our wealth and influence along to a more agreeable next generation.

However even if parents could choose their children to perfectly match their own values, selection effects would determine who had how many children – somewhat more strongly than they can now – and humanity’s values would drift over the years. If parents also choose based on other criteria – if they decide that their children could do without their own soft spot for fudge, and would benefit from a stronger work ethic – then values could change very fast. Or genetic engineering may just produce shifts in values as a byproduct. In the past we have had a safety net because every generation is basically the same genetically, and so we can’t erode what is fundamentally human about ourselves. But this could be unravelled.

Even if individual humans maintain the same values, you might expect innovations in institution design to shift the balance of power between them. For instance, what was once an even fight between selfishness and altruism within you could easily be tipped by the rest of the world making things easier for the side of altruism (as they might like to do, if they were either selfish or altruistic).

Even if you have very conservative expectations about the future, you probably face qualitatively similar changes. If things continue exactly as they have for the last thousands of years, your distant descendants’ values will be as strange to you as yours are to your own distant ancestors.

In sum, there is a general problem with the future: we seem likely to lose control of a lot of it. And while in principle some technology seems like it should help with this problem, it could also create an even tougher challenge.

These concerns have often been voiced, and seem plausible to me. But I summarize them mainly because I wanted to ask another question: what kinds of values are likely to lose influence in the future, and what kinds are likely to gain it? (Selfish values? Far mode values? Long term values? Biologically determined values?)

I expect there are many general predictions you could make about this. And as a critical input into what the future looks like, future values seem like an excellent thing to make predictions about. I have predictions of my own; but before I tell you mine, what are yours?

Which stage of effectiveness matters most?

Many altruistic endeavors seem overwhelmingly likely to be ineffective compared to what is possible. For instance building schools, funding expensive AIDS treatment, and raising awareness about breast cancer and low status.

For many other endeavors, it is possible to tell a story under which they are massively important, and hard to conclusively show that we don’t live in that story. Yet it is also hard to make a very strong case that they are better than a huge number of other activities. For instance, changing policy discourse in China, averting rainforest deforestation or pushing for US immigration reform.

There are also (at least in theory) endeavors that can be reasonably expected to be much better than anything else available. Given current disagreement over what fits in this category, it seems to either be empty at the moment, or highly dependent on values.

An important question for those interested in effective altruism is whether most of the gains from effectiveness are to come from people who support the obviously ineffective endeavors moving to plausibly effective ones, or from people who support the plausibly effective endeavors moving to the very probably effective ones.

One reason this matters is that the first jump requires hardly any new research about actual endeavors, while the second seems to require a lot of it. Another is that the first plan involves engaging quite a different demographic to the second, and probably in a different way. Finally, the second plan requires intellectual standards that can actually filter out the plausible endeavors from the very good ones. Such standards seem hard to develop and maintain. Upholding norms that filter terrible interventions from plausible ones is plenty of work, and probably easier.

My own intuition has been that most of the value will come from the second possibility. However I suspect others have the opposite feeling, or at least aim to exploit the first possibility more at the moment. What do you think? Is the distinction even the right one to draw?