
Intelligence Amplification Interview

Ryan Carey and I discussed intelligence amplification as an altruistic endeavor with Gwern Branwen. Here (docx) (pdf) is a summary of Gwern’s views. Also more permanently locatable on my website.

How to trade money and time

Time has a monetary value to you. That is, money and time can be traded for one another in lots of circumstances, and there are prices that you are willing to take and prices you are not. Hopefully, other things equal, the prices you are willing to take are higher than the ones you aren’t.

Sometimes people object to the claim that time has a value in terms of money, but I think this tends to be a misunderstanding, or a statement about the sacredness of time and the mundanity of money. I also suspect that this feeling (that time is sacred and money somehow is not) prompts people who accept in principle that money and time can be compared in value to object to actually doing it much. There are further reasons you might object to this too. For instance, perhaps having an explicit value on your time makes you feel stressed, or cold calculations make you feel impersonal, or accurate appraisals of your worth per hour make you feel arrogant or worthless.

Still I think it is good to try to be aware of the value of your time. If you have an item, and you trade it all day long, and you don’t put a consistent value on it, you will be making bad trades all over the place. Imagine if you accepted wages on a day to day basis while refusing to pay any attention to what they were. Firstly, you could do a lot better by paying attention and accepting only the higher ones. But secondly, you would quickly be a target for exploitation, and only be offered the lowest wages.

I don’t think people usually do this badly in their everyday trading off of time and money, because they do have some idea of the trades they are making, just not a clear one. But many other things go into the sense of how much you should pay to buy time in different circumstances, so I think the prices people take vary a lot when they should not. For instance, a person who would not accept a wage below $30/h will waste an hour in an airport because they don’t have internet, instead of buying wifi for $5, because they feel this is overpriced. Or they will search for ten minutes to find a place that sells drinks for $3 instead of $4, because $4 is a lot for a drink. Or they will stand in line for twenty minutes to get the surprisingly cheap and delicious lunch, without thinking of it as an expensive lunch once the wait is counted.
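To make the inconsistency concrete, here is a minimal sketch of the hourly rates implied by the hypothetical examples above. All figures are illustrative, and the $5 lunch saving is an assumption added just for the arithmetic:

```python
# Rough sketch: hourly rates implied by the hypothetical trades above.
# All numbers are illustrative, not recommendations.

def implied_hourly_rate(dollars, hours):
    """Dollars per hour implied by trading the given time for the given money."""
    return dollars / hours

# Declining $5 wifi and wasting an hour: the hour was 'sold' for $5/h.
print(implied_hourly_rate(5, 1.0))       # 5.0

# Searching ten minutes to save $1 on a drink: $6/h.
print(implied_hourly_rate(1, 10 / 60))   # 6.0

# Waiting twenty minutes in line to save (say) $5 on lunch: $15/h.
print(implied_hourly_rate(5, 20 / 60))   # 15.0

# Each of these falls well below the $30/h wage the same person refuses to work for.
```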

I agree that time is very valuable. I just disagree that you should avoid putting values on valuable things. What you don’t explicitly value, you squander.

It can be hard to think of ways that you are trading off money and time in practice. In response to a request for these, below is a list. They are intended to indicate trade-offs which might be helpful if you want to spend more money at the expense of time, or vice versa, in a given circumstance. Some are written as if to suggest that you should especially move in one direction or the other; remember that you can generally move in the opposite direction also.

Continue reading

Meta-error: I like therefore I am

I like Scott’s post on what LessWrong has learned in its lifetime. In general I approve of looking back at your past misunderstandings and errors, and trying to figure out what you did wrong. This is often very hard, because it’s hard to remember or imagine what absurd thoughts (or absences of thought) could have produced your past misunderstandings. I think this is especially because nonsensical confusions and oversights tend to be less well-formed, and thus less organizable or memorable than e.g. coherent statements are.

In the spirit of understanding past errors, here is a list of errors which I think spring from a common meta-error. Some are mentioned in Scott’s post, some were mine, some are others’ (especially those who are a combination of smart and naive I think), a few are hypothetical:

  • Because I believe agent-like behavior is obviously better than randomish reactions, I assume I am an agent (debunked!).
  • Because I think it is good to be sad about the third world, and not good to be sad about not having enough vitamin B, I assume the former is why I am sad.
  • Because I explicitly feel that racism is bad, I am presumably not racist.
  • Because my mind contains a line of reasoning that suggests I should not update much against my own capabilities because I am female, presumably I do no such thing.
  • Because I have formulated this argument that it is optimal for me to think about X-risks, I assume I am motivated to do so (also debunked on LW).
  • Because I follow and endorse arguments against moral realism, and infer that on reflection I prefer to be a consequentialist, I assume I don’t have any strong moral feelings about incest.
  • Because I have received sufficient evidence that I should believe Y, I presumably believe Y now.
  • I don’t believe Y, and the only reason I endorse to not believe things is that you haven’t got enough evidence for them, therefore I must not have enough evidence to believe Y.
  • Because I don’t understand the social role of Christmas, I presume I don’t enjoy it (note that this is a terrible failing of the outside view: none of those people merrily opening their presents understands the social role either).
  • Because I don’t endorse the social role of drinking, I assume I don’t enjoy it.
  • Because signaling sounds bad to me, I assume I don’t do it, or at least not as much as others.
  • Because I know the cost of standing up is small (it must be, it’s so brief and painless!), this cannot be a substantial obstacle to going for a run (debunked!).
  • I know good motives are better than bad motives, so presumably I’m motivated by good motives (unlike the bad people, who are presumably confused over whether good things are the things you should choose).
  • I have determined that polyamory is a good idea and babies are a bad idea, therefore I don’t expect to feel jealousy, or any inclination to procreate, in my relationships.

In general, I think the meta problem is failing to distinguish between endorsing a mental characteristic and having that characteristic. Not erroneously believing that the two are closely related, but actually just failing to notice there are two things that might not be the same.

It seems harder to make the same kind of errors with non-mental characteristics. Somehow it’s more obvious to people that saying you shouldn’t smoke is not the same as not smoking.

With mental characteristics, however, you don’t know much at all about how your brain works, and it’s not obvious exactly what your beliefs and feelings are. And your brain does produce explicit endorsements, so perhaps it is easy to identify those with the mental characteristics that the endorsements are closely related to. Note that explicitly recognizing this meta-error is different from having that recognition integrated into your understanding.

Interview with Cool Earth

An interview with the people of Cool Earth, a charity I investigated and ultimately recommended as a relatively good one while visiting GWWC last summer.

New website contents

Rob and Federico of GWWC have made a nice summary of metacharities, which might be useful to those of you interested in cause prioritization as a cause. For future reference, I’ve put it in my online collection of useful things (just for useful things which need help with their web presence).

I’ve also put up a few ‘structured cases’ for various claims there. These are intended to be well organized collections of arguments and considerations regarding a particular question, such as ‘should I invest in opening US borders?’. The claims of interest are mostly about how resources can be used to make the world better. The arguments are in progress, and probably not in particularly good formats, nor complete. However I talk to people about these often, so it is good to have them online to point to.

I’ve also put up some puzzles and other things.

Discussions for information

Information and ideas percolate through society in many forms: via research papers, media publications, schools, advocacy campaigns, and perhaps most ubiquitously, private conversations.

Private conversations fulfill purposes other than information processing and transfer, so they cannot be expected to fulfill such roles perfectly, or even particularly well. They convey implicit information about the speakers and their qualities, they manifest social maneuvering, they embody humor and other good feelings, and they make a good circumstance for enjoying company more generally.

The information-related roles that conversation can play most obviously include straightforwardly communicating information, and – in a more argumentative fashion – collaboratively figuring out what is true. Even if conversations are usually for other things as well, given that they are a major part of society’s system for processing and dispersing information, one might wonder if they could do these jobs better when those roles are important. For instance, if you partake in ‘work’ conversations with the intention of productively progressing toward specific goals, should you be doing anything other than following your natural conversation instincts?

If you wanted to fulfill these information roles with conversations, here are some ways they seem to fall short in general:

  1. They are rarely recorded or shared usefully. This is bad because it means they must be repeated anew by many different groups of people, not to mention by exactly the same people.
  2. Relatedly, conversations seem to hardly build on one another over time. If I think of a good counterargument to your point, this won’t be available to any of the gazillion others having the same argument in the near future, because neither of us will do anything that makes it so, and both of us will probably forget all but the gist of the discussion by tomorrow. I posit that a counterargument to a counterargument to a counterargument to an argument is rarely widely known, even when the argument and the first counterargument are often repeated. Except when participants have a history or perhaps a shared subculture, each discussion basically starts the topic anew. A given discussion rarely gets through many considerations, and doesn’t leave the considerations it gets through in a state to be built upon by other conversations.
  3. Discussions are often structured poorly for analysis, though perhaps ok for information transfer. It is natural for a discussion to take a fairly linear form, because only one sentence can be said at a time. But topics being discussed often don’t fit well in this form. A given statement has many potential counterarguments, and lots of possible pieces of supporting evidence, or vantage points from which to analyze it. The same is true of each of those arguments or supports (see the sketch after this list). So a person may make a statement, then another person may offer a criticism, and so on for a few levels, at which point one of the people will ‘lose the argument’ because he has no retort. But that corner of the tree was not necessarily critically important to the truth of the initial claim. If at the first level a different argument had been pursued, or different evidence had been offered, a largely unrelated path would have been taken, and someone else may have had the final say. If the parties do not remember the rest of the structure of their discussion, it is hard to go back to a sensible juncture and hash out a different supporting claim. Usually they will just move on to something else that the last bit reminded them of, and in the future vaguely remember who won the argument, without further value being created.
  4. Relatedly, it is often unclear to the participants in a discussion what the overall structure of a discussion is, or how the parts relate to one another on even a small scale. For instance, which parts are important to carrying the point, and which parts are watertight. I find that when I write out an argument at length, in a structured way, I notice gaps that weren’t salient informally. And structured arguments look as if they help students reason more clearly.
  5. Disagreement interacts badly with the social signaling purposes of conversation, as it tends to be taken as an aggressive move except when done skillfully. It’s not clear to me whether this is a fundamental problem with collaboratively critiquing ideas or an accident of the social norms we have, both for critique and for attacking people.
  6. Similarly, allocating time in conversations tends to interact badly with social signaling. The person with the best point to listen to next is unlikely to always be the one who should talk next according to fairness, kindness, status, or volume. There have been some attempts to improve this.

This list is not nearly exhaustive. Feel free to suggest more.
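As a rough illustration of the tree structure described in point 3 above, here is a minimal sketch in Python. It is a hypothetical toy data structure, not a description of any existing argument-mapping tool, and the claims in it are placeholders drawn from topics mentioned elsewhere on this page:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A node in an argument tree: a statement plus the considerations bearing on it."""
    statement: str
    supports: List["Claim"] = field(default_factory=list)
    objections: List["Claim"] = field(default_factory=list)

# A linear conversation walks a single root-to-leaf path and stops when someone
# has no retort; the rest of the tree is never explored, let alone recorded.
root = Claim("Invest in opening US borders")
root.supports.append(Claim("Large estimated gains to world income"))
objection = Claim("Migration might strain institutions in receiving countries")
root.objections.append(objection)
objection.objections.append(Claim("Evidence from past migrations suggests the effect is small"))

def size(claim: Claim) -> int:
    """Count the nodes in the tree, a crude measure of how much there is to cover."""
    return 1 + sum(size(child) for child in claim.supports + claim.objections)

print(size(root))  # 4 nodes, of which a linear exchange might only touch two or three
```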

I am told that people have often tried to improve conversational norms, but I only know of a few such efforts. These are innovations such as randomized alarms while talking, hand gestures during seminars, argument mapping, and anonymous text conversation while everyone is in the room. I’d like to see a better survey of such attempts, but so far have not found one. Hopefully I just haven’t guessed the right keywords. Pointers would be appreciated.

For figuring out what is true, it seems many of these problems are resolved by writing a more permanent, public, well structured, outline of a topic of debate, then adding to it as you find new arguments. I think various people are in favor of such a thing, but I haven’t seen it done much (again, pointers appreciated). I have tried this a bit, with Paul Christiano, with the hope that it will either help somewhat, or allow us to figure out the problems with it.

Here are a few examples in progress:

  1. The case for Cool Earth (linked before from my climate research)
  2. US open borders advocacy
  3. Animal activism

So far we don’t seem to have found any irrecoverable problems. However I’m curious to hear your thoughts on the merits of such interventions. Especially if you think this sort of thing is a terrible aid to discussion for the purpose of figuring things out.

In praise of pretending to really try

Ben Kuhn makes some reasonable criticisms of the Effective Altruism movement. His central claim is that in the dichotomy of ‘really trying’ vs. ‘pretending to try’, EAs ‘pretend to really try’.

To be explicit, I understand these terms as follows:

‘Really trying’: directing all of your effort toward actions that you believe have the highest expected value in terms of the relevant goals

‘Pretending to try’: choosing actions with the intention of giving observers the impression that you are trying.

‘Pretending to really try’: choosing actions with the intention of giving observers the impression that you are trying, where the observers’ standards for identifying ‘trying’ are geared toward a ‘really trying’ model, e.g. they ask whether you are really putting in effort, and whether you are doing what should have the highest expected value from your point of view.

Note the normative connotations. ‘Really trying’ is good, ‘pretending to try’ is not, and ‘pretending to really try’ is hypocritical, so better than being straight out bad, but sullied by the inconsistency.

I claim Effective Altruism should not shy away from pretending to try. It should strive to pretend to really try more convincingly, rather than striving to really try.

Why is this? Because Effective Altruism is a community, and the thing communities do well is modulating individual behavior through interactions with others in the community. Most actions a person takes as a result of being part of a community are pretty much going to be ‘pretending to try’ by construction. And such actions are worth having. If they are discouraged, the alternative will not be really trying. And pretending to try well is almost as good as really trying anyway.

Actions taken as a result of being in a community will be selected for visibility, because visible actions are the ones you are able to pick up from others in the community. This doesn’t necessarily mean you are only pretending to try – it will just tend to look like pretending to try. But your actions will also probably be visible to others, because by assumption you are driven to act in part by your community membership, and most ways such motivation can arise involve the possibility of others in the community being aware of your actions.

Many actions in communities are chosen because others are doing them, or others will approve of them, or because that’s what it seemed like a good community member would do, not because they were calculated to be best. And these dynamics help communities work and achieve their goals.

Of course people who were really trying would do better than those who were only pretending to try. As long as they could determine what a person who was pretending would do, a person who really tries could just copy them where it is useful to have pretending-to-try type behaviors. But there is no option to make everyone into such zealous tryers that they can do as well as the effective altruism movement without the social motivations. The available options are just those of what to encourage and what to criticize. And people who are pretending to really try are hugely valuable, and should not be shamed and discouraged away.

A community of people not motivated by others seeing and appreciating their behavior, not concerned for whether they look like a real community member, and not modeling their behavior on the visible aspects of others’ behavior in the community would generally not be much of a community, and I think would do less well at pursuing their shared goals.

I don’t mean to say that ‘really trying’ is bad, or not a good goal for an individual person. But it is a hard goal for a community to usefully and truthfully have for many of its members, when so much of its power relies on people watching their neighbors and working to fit in. ‘Really trying’ is also a very hard thing to encourage others to do. If people heed your call to ‘really try’ and do the ‘really trying’ things you suggest, this will have been motivated by your criticisms, so seems more like a better quality of pretending to really try, than really trying itself. Unless your social pressure somehow pressured them to stop being motivated by social pressure.

I also fear that pretending to try, to various extents, is underestimated because we like to judge other people for their motives. A dedicated pretender, who feels socially compelled to integrate cutting-edge behavioral recommendations into their pretense, can be consequentially very valuable, regardless of their virtue on ethical accounts.

So I have argued that pretending to try is both inevitable and useful for communities. I think also that pretending to really try can do almost as well as really trying, as long as someone puts enough effort into identifying chinks in the mask. In the past, people could pretend to try just by giving some money to charity; but after it has been pointed out enough times that this won’t achieve their purported goals, they have to pick up their act if they want anyone (themselves included) to believe they are trying. I think a lot of progress can come from improving the requirements one must meet to look like one is trying. This means pointing out concrete things that ‘really trying’ people would be doing, and it hopefully leads to pretending to really try, and then pretending to really really try. Honing the pretense, and making sure everyone knows what the current standards are, are important tasks in a community with goals, and they might be forgotten if we undervalue pretending to really try.

Research on climate organizations

Burning rainforest in Brazil: destruction of tropical rainforest, as prevented by Cool Earth (Photo credit: Wikipedia)

Here is the second post summarizing some of the climate change research I did at CEA last summer (the first is here). Below are links to outlines of the reasoning behind a few of the estimates there (they are the ‘structured cases’ mentioned in the GWWC post).

The organizations were investigated to very different levels of detail. This is why Sandbag for instance comes out looking quite cost effective, but is not the recommendation. I basically laid out the argument they gave, but had almost no time to investigate it. Adding details to such estimates seems to reliably worsen their apparent cost-effectiveness a lot, so it is not very surprising if something looks very cost effective at first glance.

The Cool Earth case is the most detailed, though most of the details are rough guesses at much more detailed things. The cases are designed to allow easy amendment if more details or angles on the same details are forthcoming.

As a side note, I don’t think GWWC plans on more climate change research soon, but if anyone else is interested in such research, I’d be happy to hand over some useful (and neatly organized!) bits and pieces.

Cool Earth

Solar Aid

Sandbag

Probabilistic self-defeat arguments

Alvin Plantinga’s ‘evolutionary argument against naturalism’ (EAAN) goes like this:

  1. If humans were created by natural selection, and also not under the guidance of a creator (‘naturalism’), then (for various reasons he gives) the probability that their beliefs are accurate is low.
  2. Therefore believing in natural selection and naturalism should lead a reasonable person to abandon these beliefs (among others), i.e. belief in naturalism and natural selection is self-defeating.
  3. Therefore a reasonable person should not believe in naturalism and natural selection.
  4. Naturalism is generally taken to imply natural selection, so a reasonable person should not believe in naturalism.

This has been attacked from many directions. I agree with others that what I have called point 1 is dubious. However even on accepting it, it seems to me the argument fails.

Let us break down the space of possibilities under consideration into:

  • A: N&T: naturalism and true beliefs
  • B: N&F: naturalism and false beliefs
  • C: G&T: God and true beliefs

The EAAN says conditional on N, B is more likely than A, and infers from this that one cannot believe in either A or B, since both are included in N.

But there is no obvious reason to lump A and B together. Why not lump B and C together? Suppose we believe ‘natural selection has not produced true beliefs in us’. Then either natural selection has produced false beliefs, or God has produced true beliefs. If we don’t assign very high credence to the latter relative to the former, then we have a version of the EAAN that contradicts its earlier incarnation: ‘natural selection has not produced true beliefs in us’ is self-defeating. So we must believe that natural selection has produced true beliefs in us*.

What if we do assign a very high credence to C over B? It seems we can just break C up into smaller parts, and defeat them one at a time. B is more likely than C&D, where D = “I roll 1 on my n-sided die”, for some value of n. So consider the belief “B or C&D”. This is self-defeating. As it would seem “B or C&E” is, where E is “I roll 2 on my n-sided die”. And so on.

By this reasoning, if there is any possible world that is self-defeating, pretty much any other possible world can be sucked into the defeat. This depends a bit on the details about how unlikely reliable beliefs must be for belief in that situation to be self-defeating, and how the space can be broken up. But generally, this reasoning allows the self-defeatingness of any state of affairs to contaminate any other state of affairs that can be placed in disjunction with it, and that can be broken into states of affairs not much more probable than it.
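To make the contamination worry concrete, here is a minimal numerical sketch. The prior probabilities and the threshold are arbitrary placeholders, chosen only so that B is more likely than each thin slice of C; nothing hinges on the particular values:

```python
# Minimal sketch of the 'contamination' argument; priors are arbitrary placeholders.

p_B = 0.2          # B: naturalism and unreliable (false) beliefs
p_C = 0.7          # C: God and true beliefs
n = 100            # sides on the hypothetical die
p_slice = p_C / n  # C together with one particular die outcome, e.g. C&D

# Reading of the EAAN criterion used here: a belief state is self-defeating if,
# conditional on it, reliable beliefs are improbable.
def self_defeating(p_unreliable, p_reliable, threshold=0.5):
    return p_reliable / (p_unreliable + p_reliable) < threshold

# Each disjunction "B or (C and the die shows k)" is self-defeating...
print(self_defeating(p_B, p_slice))  # True: reliable beliefs get probability ~0.03

# ...yet the union of these disjunctions over all n outcomes is just "B or C",
# under which reliable beliefs are quite probable.
print(self_defeating(p_B, p_C))      # False: reliable beliefs get probability ~0.78
```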

It seems to me that any reasoning with this property must be faulty. So I suggest probabilistic self-defeat arguments of this form can’t work in general.

It could be that Plantinga means to make a stronger argument, for instance ‘there is no set of beliefs consistent with naturalism under which one’s beliefs have high probability’, but this seems like quite a hard argument to make. I could place a high probability on A for instance.

It could also be that Plantinga means to use further assumptions that make a distinction between grouping A and B together and grouping B and C together. One possibility is that it is important that N is a cause of T or F, but this seems both ad-hoc and possible to get around. At any rate, Plantinga doesn’t seem to articulate further assumptions in the account of his argument that I read, so his argument seems unlikely to be correct as it stands, all other criticisms aside.

*Note that if you wanted to turn the argument against creationism, it seems you could also just expand the space to include creators who don’t produce true beliefs, and  depending on probabilities, use this to defeat the belief in a creator, including one who does produce true beliefs.

The future of values 2: explicit vs. implicit

Relatively minor technological change can move the balance of power between values that already fight within each human. Beeminder empowers a person’s explicit, considered values over their visceral urges, in much the same way that the development of better slingshots empowers one tribe over another.

In the arms race of slingshots, the other tribe may soon develop their own weaponry. In the spontaneous urges vs. explicit values conflict though, I think technology should generally tend to push in one direction. I’m not completely sure which direction that is however.

At first glance, it seems to me that explicit values will tend to have a much better weapons research program. This is because they have the ear of explicit reasoning, which is fairly central to conscious research efforts. It seems hard to intentionally optimize something without admitting at some point in the process that you want it.

When I want to better achieve my explicit goal of eating healthy and cheap food, for instance, I can sit down and come up with novel ways to achieve this. Sometimes such schemes even involve tricking the parts of myself that don’t agree with this goal, so divorced are they from this process. When I want to fulfill my urge to eat cookie dough, on the other hand, I less commonly deal with this by strategizing to make cookie dough easier to eat in the future, or to trick other parts of myself into thinking eating cookie dough is a prudent plan.

However this is probably at least partly due to the cookie dough eating values being shortsighted. I’m having trouble thinking of longer term values I have that aren’t explicit on which to test this theory, or at least having trouble admitting to them. This is not very surprising; if they are not explicit, presumably I’m either unaware of them or don’t endorse them.

This model in which explicit values win out could be doubted for other reasons. Perhaps it’s pretty easy to determine unconsciously that you want to live in another suburb because someone you like lives there, and then, after you have justified it by saying it will be good for your commute, all the logistics that you need to be conscious for can still be carried out. In this case it’s easy to almost-optimize something consciously without admitting that you want it. Maybe most cases are like this.

Also note that this model seems to be in conflict with the model of human reasoning as basically involving implicit urges followed up by rationalization. And sometimes at least, my explicit reasoning does seem to find innovative ways to fulfill my spontaneous urges. For instance, it suggests that if I do some more work, then I should be able to eat some cookie dough. One might frame this as conscious reasoning merely manipulating laziness and gluttony to get a better deal for my explicit values. But then rationalization would say that. I think this is ambiguous in practice.

Robin Hanson responds to my question by saying there are not even two sets of values here to conflict, but rather one which sometimes pretends to be another. I think it’s not obvious how that is different, if pretending involves a lot of carrying out what an agent with those values would do.

An important consideration is that a lot of innovation is done by people other than those using it. Even if explicit reasoning helps a lot with innovation, other people’s explicit reasoning may side with your inchoate hankerings. So a big question is whether it’s easier to sell weaponry to implicit or explicit values. On this I’m not sure. Self-improvement products seem relatively popular, and to be sold directly to people more often than any kind of products explicitly designed to e.g. weaken willpower. However products that weaken willpower without an explicit mandate are perhaps more common. Also much R&D for helping people reduce their self-control is sponsored by other organizations, e.g. sellers of sugar in various guises, and never actually sold directly to the customer (they just get the sugar).

I’d weakly guess that explicit values will win the war. I expect future people to have better self-control, and do more what they say they want to do. However this is partly because of other distinctions that implicit and explicit values tend to go along with; e.g. farsighted vs. not. It doesn’t seem that implausible that implicit urges really wear the pants in directing innovation.