Should altruists pay for profitable things?

People often claim that activities which are already done for profit are bad altruistic investments, because they will be done anyway, or at least because the low hanging fruit will already be taken. It seems to me that this argument doesn’t generally work, though something like it does sometimes. Paul has written at more length about altruistic investment in profitable ventures; here I want to address just this one specific intuition which seems false.

Suppose there are a large number of things you can invest in, and for each one you can measure private returns (which you get), public returns (which are good for the world, but which you don’t capture), and total returns (the sum of those). Also, suppose all returns are diminishing, so each time an activity is invested in, the next unit of investment pays off less, both privately and publicly.

Suppose private industry invests in whatever has the highest private returns, until it has nothing left that it wants to invest. Then there is a market rate of return: on the margin, more investment in anything gives the same private return, except for some things which always have lower private returns and are never invested in. This is shown in the diagram below as a line with a certain slope on the private curve.


Total returns and private returns to different levels of investment.

There won’t generally be a market rate of total returns, unless people use total returns to make decisions instead of private returns. But note that if total returns to an endeavor are generally some fraction larger than private returns (i.e. positive externalities are larger than negative ones), then the rates of total return available across interventions that are invested in for private gain should generally be higher than the market rate of private returns.

So, after the market has invested in the privately profitable things, the slope of the private returns curve will be the same for everything that was invested in at all; the exceptions are the things that were never invested in. What do you know about those things? That their private returns slope must be flatter, and that they have been invested in less.


Private returns for four different endeavors. Dotted lines show how much people have invested in the endeavor before stopping. At the point where people stop, all of the endeavors have the same rate of returns (slope).

What does this imply about the total value from investing in these different options? This depends on the relationship between private value and total value.

Suppose you knew that private value was always a similar fraction of total value, say 10%. Then everything that had ever been invested in would produce 10x market returns on the margin, while everything that had not been would produce some unknown value which was less than that (since its private returns are below the market rate, its total returns are below 10x the market rate). Then the best social investments are those that have already been invested in by industry.

If, on the other hand, public value was completely unrelated to private value, then all you know about the social value of an endeavor that has already been funded is that it is less than it was initially (because of the diminishing returns). So now you should only fund things that have never been funded (unless you had some other information pointing to a good somewhat funded opportunity).

The real relationship between private value and total value would seem to lie between these extremes, and vary depending on how you choose endeavors to consider.
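To make the two extreme cases concrete, here is a minimal sketch (my own illustration, with made-up numbers and a simple 1/(1+x) form for diminishing returns, neither of which comes from the post). Profit-seekers invest in each endeavor until its marginal private return falls to the market rate; we then look at marginal social returns under assumption (a), private value is always 10% of total value, and assumption (b), total value is unrelated to private value.

```python
import random

random.seed(0)

MARKET_RATE = 1.0   # marginal private return available everywhere at equilibrium

# Initial marginal private returns for a handful of endeavors (made-up numbers).
endeavors = [3.0, 2.0, 1.5, 0.8, 0.5]

def equilibrium_investment(p, r=MARKET_RATE):
    """Invest until the marginal private return p/(1+x) falls to the market rate r."""
    return max(0.0, p / r - 1.0)

for p in endeavors:
    x = equilibrium_investment(p)
    marginal_private = p / (1.0 + x)

    # Case (a): total value is always 10x private value.
    marginal_total_a = 10.0 * marginal_private

    # Case (b): total value is unrelated to private value (random multiplier).
    marginal_total_b = random.uniform(0.0, 10.0) / (1.0 + x)

    status = "funded" if x > 0 else "never funded"
    print(f"p={p:.1f} ({status}): invested={x:.2f}, "
          f"marginal private={marginal_private:.2f}, "
          f"marginal total (a)={marginal_total_a:.2f}, (b)={marginal_total_b:.2f}")
```

Under (a), every funded endeavor offers ten times the market rate at the margin and the unfunded ones offer less; under (b), only the unfunded endeavors retain their original, undiminished social returns.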

Note on replaceability

Replaceability complicates things, but it’s not obvious how much it changes the conclusions.

If you invest in something, you will lower the rate of return for the next investor in that endeavor, and so will often push other people out of that area and into something else in the future.

If your altruistic investments tend to displace non-altruists, then the things they will invest in will less suit your goals than if you could have displaced an altruist. This is a downside to investing in profitable things: the area is full of people seeking profits. Whereas if altruists coordinate to only do non-profitable things, then when they displace someone, that person will move to something closer to what the displacing altruist likes.

In a world where social returns on unprofitable things are generally lower than social returns on profitable things, though, it would be better to just displace a profit-seeking person, who will go and do something else profitable and socially useful, unless you have more insight into the social value of different options than is represented in the current model. If you do, then altruists might still do better by coordinating to focus on a small range of profitable and socially valuable activities.

For the first case above, where private value is a constant fraction of total value, replaceability is immaterial. If people move out of your area to invest in another area with equal private returns, they still create the same social value. Though note that with the slightly lower rate of returns on the margin, they will consume a bit more instead of investing. Nonetheless, as without the replaceability considerations, it is best here to invest in profitable ventures.

In the second case, where private and public returns are unrelated, investing in something private will push people to other profitable interventions with random social returns. This is less good than pushing altruists to other unprofitable interventions, but it was already better in this case to invest in non-profitable ventures, so again replaceability doesn’t change the conclusion.

Consider an intermediate case where total returns tend to be higher than private returns, but are fairly varied. Here replaceability means that the value created by your investment is basically the average social return on random profitable investments, not the return on the particular one you invest in. On this model, that doesn’t change anything (since you were estimating social returns only from whether something was invested in or not), but if you knew more it would. The basic point, though, is that merely knowing that something has been invested in is not obviously grounds to think it is better or worse as a social investment.

Conclusions

If you think the social value of an endeavor is likely to be at least as large as its private value, and it is being funded by private industry, then you can lower-bound its total value at market returns, which is arguably a lot better than many giving opportunities that nobody has ever tried to profit from.

Note that in a specific circumstance you may know other things that can pin down the relationship between private and total value better. For instance, you might expect self-driving cars to produce total value that is many times greater than what companies can internalize, whereas you might expect providers of nootropics to internalize a larger fraction of their value (I’m not sure if this is true). So if hypothetically the former were privately invested in, and the latter not, you would probably like to invest more in the former.

How to buy a truth from a liar

Suppose you want to find out the answer to a binary question, such as ‘would open borders destroy America?’, or ‘should I follow this plan?’. You know someone who has access to a lot of evidence on the question. However, you don’t trust them, and in particular, you don’t trust them to show you all of the relevant evidence. Let’s suppose that if they purport to show you a piece of evidence, you can verify that it is real. However, since they can tell you about any subset of the evidence they know, they can probably make a case for either conclusion. So without looking into the evidence yourself, it appears you can’t really employ them to inform you, because you can’t pay them more when they tell the truth. It seems to be a case of ‘he who pays the piper must know the tune’.

But here is a way to hire them: pay them for every time you change your mind on the question, within a given period. Their optimal strategy for revealing evidence to make money must leave you with the correct conclusion (though maybe not all of the evidence), because otherwise they would be able to get in one more mind-change by giving you the remaining evidence. (And their optimal strategy overall can be much like their optimal strategy for making money, if enough money is on the table).
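Here is a toy check of that argument, a sketch of my own rather than anything from the post: evidence is a handful of log-likelihood ratios for the binary question, the listener believes whichever side the revealed evidence favors on net, and the expert may reveal any pieces in any order and stop whenever they like, being paid once per mind-change. Brute-forcing the expert’s best strategy over these made-up numbers always leaves the listener on the side the full evidence supports.

```python
from itertools import permutations

def mind_changes(order, prior=0.0):
    """Count sign flips of the cumulative evidence as pieces are revealed in `order`."""
    flips, total, belief = 0, prior, prior >= 0
    for llr in order:
        total += llr
        if (total >= 0) != belief:
            flips += 1
            belief = total >= 0
    return flips, belief

def best_strategy(evidence):
    """The expert's payment-maximizing reveal sequence: any prefix of any permutation."""
    best = (-1, None, None)
    for perm in permutations(evidence):
        for k in range(len(perm) + 1):
            flips, belief = mind_changes(perm[:k])
            if flips > best[0]:
                best = (flips, perm[:k], belief)
    return best

evidence = [2.0, -1.5, 3.0, -0.5, -2.5, 1.2]   # made-up log-likelihood ratios
flips, reveals, final_belief = best_strategy(evidence)
print(f"payments: {flips}; expert reveals {len(reveals)} of {len(evidence)} pieces")
print(f"listener ends on the 'yes' side: {final_belief}; full evidence says: {sum(evidence) >= 0}")
```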

This may appear to rely on you changing your mind according to evidence. However I think it only really requires that you have some probability of changing your mind when given all of the evidence.

Still, you might just hear their evidence, and refuse to officially change your mind, ever. This way you keep your money, and privately know the truth. How can they trust you to change your mind? If the question is related to a course of action, your current belief (a change of which would reward them) can be tied to a commitment to a certain action, were the period to end without further evidence provided. And if the belief is not related to a course of action, you could often make it related, via a commitment to bet.

This strategy seems to work even for ‘ought questions’, without the employee needing to understand or share your values.

Why would info worsen altruistic options?

One might expect a simple estimate to be equally likely to overestimate or underestimate the true quantity of interest. For instance, a back-of-the-envelope calculation of the number of pet shops in New York City seems as likely to be too high as too low.

Apparently this doesn’t work for do-gooding. The more you look into an intervention, the worse it gets. At least generally. At least in GiveWell’s experience, and in my imagination. I did think it fit my real (brief) experience in evaluating charities, but after listing the considerations that went into my more detailed calculation of Cool Earth’s cost-effectiveness, more are positive than negative (see list at the end). The net effect of these complications was still negative for Cool Earth, and I still feel like the other charities also suffered many negative complications. However I don’t trust my intuition that this is obviously strongly true. More information welcome.

In this post I’ll assume charitable interventions do consistently look worse as you evaluate them more thoroughly, and concern myself with the question of why. Below are a number of attempted explanations for this phenomenon that I and various friends can think of.

Regression to the altruistic intervention mean

Since we are looking for the very best charities, we start with ones that look best. And like anything that looks best, these charities will tend to be less good than they look. This is an explanation Jonah mentions.

In this regression to the mean story, which mean is being regressed to? If it is the mean for charities, then ineffective charities should look better on closer inspection. I find this hard to believe. I suspect that even if a casual analysis suggests $1500 will give someone a one week introduction to food gardening, which they hope will reduce their carbon footprint by 0.1 tonnes per year, the real result of such spending will be much less than the tonne per $1000 implied by a simple calculation. The participant’s time will be lost in attending and gardening, the participants probably won’t follow up by making a garden, they probably won’t keep it up for long, or produce much food from it, and so on. There will also be some friendships formed, some leisure and mental health from any gardening that ultimately happens, some averted trips to the grocery store. My guess is that these positive factors don’t make up for the negative ones any better than they do for more apparently effective charities.

Regression to the possible action mean + value is fragile

Instead perhaps the mean you should regress to is not that of existing charities, but rather that of possible charities, or – similarly – possible actions. This would suggest all apparently positive value charities are worse than they look – the average possible action is probably neutral or negative. There are a lot of actions that just involve swinging your arms around or putting rocks in your ears, for instance. Good outcomes are a relatively small fraction of possible outcomes, and similarly, good plans are probably a relatively small fraction of possible plans.

Advertising

The initial calculation of a charity’s cost-effectiveness usually uses the figures that the charity provided you with. This information might be expected to be selected for looking optimistic, whether through outright dishonesty or through selection from among the many possible pieces of data they could have told you about.

This seems plausible, but is hard to believe as the main effect I think. For one thing, in many of these cases it would be hard for the charity to be very selective – there are some fairly obvious metrics to measure, and they probably don’t have that many measures of them. For instance, for a tree planting charity, it is very natural for them to tell us how many trees they have planted, and natural for us to look at the overall money they have spent. They could have selected a particularly favorable operationalization of how many trees they planted, but there still doesn’t seem to be that much wiggle room, and this wouldn’t show up (and so produce a difference between the early calculation and later ones) unless one did a very in-depth investigation.

Common relationships tend to make things worse with any change

Another reason that I doubt the advertising explanation is that a very similar thing seems to happen with personal plans, which appear to be less geared toward advertising, at least on relatively non-cynical accounts. That is, if I consider the intervention of catching the bus to work, and I estimate that the bus takes ten minutes and comes every five minutes, and that it takes me three minutes to walk at each end, then I might think it will take me 16-21 minutes to get to work. In reality, it will often take longer, and almost never take less time.

I don’t think this is because I go out of my way to describe the process favorably, but rather because almost every change that can be made from the basic setup – where I interact with the bus as planned and nothing else happens – makes things slower rather than faster. If the bus comes late, I will be late. If the bus comes more than a few minutes early, I will miss it and also be late. Things can get in the way of the bus and slow it down arbitrarily, but it is hard for them to get out of the way of the bus and speed it up so much. I can lose my ticket, or not have the correct change, but I can’t benefit much from finding another ticket, or having more change than I need.
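As a sanity check on this asymmetry, here is a small Monte Carlo sketch of the bus trip. It is my own toy model: the plan’s components are the ones above, but the delay distributions (late buses, occasional missed buses, traffic) are invented. The simulated times almost never beat the naive range and frequently exceed it.

```python
import random

random.seed(0)
NAIVE_MIN, NAIVE_MAX = 16, 21   # walk 3 + wait 0-5 + ride 10 + walk 3

def one_commute():
    walk_to_stop = 3 + max(0, random.gauss(0, 1))      # hold-ups only ever add time
    wait = random.uniform(0, 5)
    if random.random() < 0.1:                           # bus early or full: wait another cycle
        wait += 5
    ride = 10 + max(0, random.expovariate(1 / 2.0))     # traffic only slows the bus
    walk_from_stop = 3 + max(0, random.gauss(0, 1))
    return walk_to_stop + wait + ride + walk_from_stop

times = [one_commute() for _ in range(10_000)]
print(f"mean: {sum(times) / len(times):.1f} min (naive range {NAIVE_MIN}-{NAIVE_MAX})")
print(f"share above naive max: {sum(t > NAIVE_MAX for t in times) / len(times):.0%}")
print(f"share below naive min: {sum(t < NAIVE_MIN for t in times) / len(times):.0%}")
```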

These kinds of relationships between factors that can change in the world and the things we want are common. Often a goal requires a few inputs to come together, such that having extra of an input doesn’t help if you don’t have extra of the others, yet having less of one wastes the others. Having an extra egg doesn’t help me make more cake, while missing an egg disproportionately shrinks the cake. Often two things need to meet, so moving either of them in any direction makes things worse. If I need the eggs and the flour to meet in the bowl, pouring either of them anywhere different will cause destruction.

This could be classed under value being fragile, but I think it is worth pointing out the more specific forms this takes. In the case of charities, this might apply because you need a number of people to gather in the same place where a vaccination clinic has been set up, at the same time as some nurses and a large number of specific items.

This might explain why things turn out in practice to be worse than they were in plans; however, I’m not sure this is the only thing to be explained. It seems that if you look at plans in more depth (without seeing the messy real-world instantiation), you also become more pessimistic. This might just be because you remember to account for the things that might go wrong in the real world. But looking at the Cool Earth case, these kinds of effects don’t seem to account for any of the negative considerations.

Negative feedbacks

Another common kind of relationship between things in the real world is the one where if you change a thing, it produces a force which pushes it back the way it came. For instance, if you donate blankets to the poor, they will acquire fewer blankets in other ways, so in total they will not have as many more blankets as you gave them. Or if you are a vegetarian, the price of meat will go down a little, and someone else will eat more meat. This does account for a few of the factors in the Cool Earth case, so that’s promising. For instance, protecting trees changes the price of wood, and removing carbon from the atmosphere lowers the rate at which other processes remove carbon from the atmosphere.

Abstraction tends to cause overestimation

Another kind of negative consideration in the Cool Earth case is that saving trees really only means saving them from being logged with about 30% probability. Similarly, it only means saving them for some number of years, not indefinitely. I think of these as instances of a general phenomenon where a thing gets labeled as something which makes up a large fraction of it, and then reasoned about as if it entirely consists of that thing. And since the other things that really make it up don’t serve the same purpose in the reasoning, estimates tend to be wrong in the direction of the thing being smaller. For instance, if I intend to walk up a hill, I might conceptualize this as involving entirely walking up the hill, and so make a time estimate from that. Whereas in fact, it will involve some amount of pausing, zigzagging, and climbing over things, which do not have the same quality of moving me up the hill at 3mph. Similarly, an hour’s work contains some non-work often, and a 300 pound cow contains some things other than steaks.

But, you may ask, shouldn’t the costs be underestimated too? And in that case, the cost-effectiveness should come out the same. That does seem plausible. One thought is that prices are often known a lot better than the value of whatever they are prices for, so there is perhaps less room for error. E.g. you can see how much it costs to buy a cow, due to the cow market, but it’s less obvious what you will get out of it. This seems a bit ad hoc, but on the other hand some thought experiments suggest to me that something like this is often going on when things turn out worse than hoped.

Plan-value

If you have a plan, then that has some value. If things change at all, then your plan gets less useful, because it doesn’t apply so well. Thus you should expect to consistently lose value when reality diverges from your expectations in any direction, which it always does. Again, this mostly seeks to explain why things turn out worse in reality than expected, though it could also explain why some details, once noticed, make plans look worse.

Careful evaluators attend to negatives

Suppose you are evaluating a charity, and you realize there is a speculative reason to suspect that the charity is less good than you thought. It’s hard to tell how likely it is, but you feel like it roughly halves the value. I expect you take this into account, though it may be a struggle to do so well. On the other hand, if you think of a speculative positive consideration that feels like it doubles the value of your charity, but it’s hard to put numbers on it, it is more allowable to ignore it. A robust, conservative estimate is often better than a harder to justify, more subjective, but more accurate estimate. Especially in situations where you are evaluating things for others, and trying to be transparent.

This situation may arise in part because people expect most things to be worse than they appear – overestimating value seems like a greater risk than underestimating it does.

Construal level theory

We apparently tend to think of valuable things (such as our goals) as more abstract than bad things (such as impediments to those goals). At least this seems plausible, and I vaguely remember reading it in a psychology paper or two, though I can’t find any now. If this is so, then when we do very simple calculations, one might expect them to focus on abstract features of the issue, and so disproportionately on the positive features. This seems like a fairly incomplete explanation, as I’m not sure why good things would naturally seem more abstract. I also find this hard to cash out in any concrete cases – it’s hard to run a plausible calculation of the value of giving to Cool Earth that focuses on abstracted bad things, other than the costs, which are already included.

Appendix: some examples of more or less simple calculations

A basic calculation of Cool Earth’s cost to reduce a tonne of CO2:

825,919 pounds in 2012 x 6 years / (352,000 acres x 260 tonnes/acre) ≈ 6 pence/tonne ≈ 10 cents/tonne
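For reference, here is the same arithmetic written out (my re-computation of the figures above; the exchange rate is my rough assumption, and rounding accounts for the small difference from the quoted round numbers).

```python
spend_gbp_2012 = 825_919        # pounds spent in 2012
years = 6                       # years of operation assumed in the calculation
acres_protected = 352_000
tonnes_co2_per_acre = 260
usd_per_gbp = 1.6               # rough 2012-2013 exchange rate (my assumption)

cost_per_tonne_gbp = (spend_gbp_2012 * years) / (acres_protected * tonnes_co2_per_acre)

print(f"{cost_per_tonne_gbp * 100:.1f} pence per tonne")                   # ~5.4p, rounded above to ~6p
print(f"{cost_per_tonne_gbp * usd_per_gbp * 100:.1f} US cents per tonne")  # ~8.7c, rounded above to ~10c
```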

If we take into account:

  • McKinsey suggests approach is relatively cost-effective (positive/neutral)
  • Academic research suggests community-led conservation is effective (positive/neutral)
  • there are plausible stories about market failures (positive/neutral)
  • The CO2 emitted by above ground sources is an underestimate (positive)
  • The 260 tonnes/acre figure comes from one area, but the other areas may differ (neutral/positive)
  • The projects ‘shield’ other areas, which are also not logged as a result (positive)
  • Cool Earth’s activities might produce other costs or benefits (neutral/positive)
  • Upcoming projects will cost a different amount to those we have looked at (neutral/positive)
  • When forestry is averted, those who would have felled it will do something else (neutral/negative)
  • When forestry is averted, the price of wood rises, producing more forestry (negative)
  • Forests are only cleared 30% less than they would be (negative)
  • The cost they claim for protecting one acre is higher than that inferred from dividing their operating costs by what they have protected (negative)
  • The forest may be felled later (negative)
  • Similarity of past and future work (neutral)
  • other effects on CO2 from averting forestry (neutral)
  • CO2 sequestered in wood in long run (neutral)
  • CO2 sequestered in other uses of cleared land (neutral)

…we get $1.34/tonne. However that was 8 positive (or positive/neutral), 4 completely neutral, and only 5 negative (or negative/neutral) modifications.

In case you wonder, the considerations marked ‘positive/neutral’ or ‘negative/neutral’ made no difference to the calculation, but appear to lean, if anything, somewhat in the non-neutral direction. Some of these were not small, but were only counted as added support for parts of the story, so they improved our confidence but didn’t change the estimate (yeah, I didn’t do the ‘update your prior on each piece of information’ thing, but more or less just multiplied together the best numbers I could find or make up).

Examples of estimates which aren’t goodness-related:

If I try to estimate the size of a mountain, will it seem smaller as I learn more? (yes)

Simple estimate: let’s say it takes about 10 hours to climb, and I think I can walk uphill at about 2 mph, so a 20 mile walk. Let’s say I think the angle is around 10 degrees. Then sin(10°) = height/20, so height = 20 × sin(10°) ≈ 3.5 miles.
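Written out as a quick check (my addition), the simple estimate does give about three and a half miles:

```python
import math

hours, mph, angle_degrees = 10, 2, 10
path_miles = hours * mph                                    # a 20 mile walk
height_miles = path_miles * math.sin(math.radians(angle_degrees))
print(f"{height_miles:.1f} miles")                          # ~3.5
```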

Other considerations:

  • the side is bumpy, so I probably walk 20 miles in less than 20 miles => mountain is shorter
  • the path up the mountain is not straight – it goes around trees and rocks and things, so I probably walk 20 miles in even less than 20 miles => the mountain is shorter
  • when I measured that I could walk at about 2mph, I was probably walking the whole time, whereas when we went up the mountain, probably I really stopped sometimes to look at things, or due to confusion about the path, or whatever, so probably I can’t walk 2mph up the mountain => the walk is probably less than 20 miles. => the mountain is shorter

If I try to estimate the temperature, will it seem lower as I learn more? (neutral)

Simple estimate: I look at the thermometer, and it says a number. I think that’s the temperature.

Further considerations:

  • the thermometer could be in the shade or the sun or the wind (neutral)
  • the thermometer is probably a bit delayed, which could go either way (neutral)
  • I may misread the thermometer a bit depending on whether my eye is even with it – this could go either way (neutral)

 

Setting an example too good

Jeff Kaufman points to a kind of conflict: as he optimizes his life better for improving the world, his life looks less like other people’s lives, so it makes a less good example showing that other people could also optimize their lives to make the world better. This seems similar to problems that come up often, regarding whether one should really do the (otherwise) best thing, if it will discourage onlookers from wanting to do the best thing.

These conflicts seem to be a combination of at least two different kinds of problems. Likely more, but I’ll talk about two.

One of them seems like it must be a general problem for people wanting others to follow them in doing a new thing. Since people are not doing the thing, it is weird. If you do the thing a little, you show onlookers that people like them can do it. When you do it a lot, they start to suspect you are just a freak. This might even put them off trying, since they probably aren’t the kind of person who could really succeed.

For instance, if you can kind of juggle, it suggests to an observer that they too could learn to kind of juggle. However if you can juggle fifty burning chairs, they begin to think that you are inherently weird. They also think that they are not cut out for the juggling world, since as far as they know they are not a freak.

This is a problem that both you and the observer would like to resolve – if it is really not very hard to become a good juggler, both of you would like the observer to know that.

The other kind of problem is less cooperative. Instead of observers thinking they can’t reach the extremes you have attained, they may just not want to. It looks weird, after all. You suspect that if they became half as weird as you, they would then want to be as weird as you, so want them to ‘take the gateway drug’. They may also suspect they would behave in this way, and don’t want to, and so would like to avoid becoming weird at all. At this point, you may be tempted to pretend that they would only ever get half as weird as you, because you know they would be happy to be half as weird, as long as it didn’t lead to extreme weirdness. So you may hide your weirdness. In which case you have another fairly general problem: that of wanting to deceive observers.

While there are many partial solutions to the second problem, it is a socially destructive zero sum game that I’m not sure should be encouraged. The first problem seems more tractable and useful to solve.

One way to lessen the first problem is to direct attention to a stream of people between amateur and very successful. If the person who can juggle very impressively tends to hang out with some friends at various intermediate juggling levels, it seems more plausible that these are just a spectrum of skills that people can move through in their adult lives, rather than a discrete cluster of freaks way above the rest. Another way to lessen this effect is to just explicitly claim or demonstrate that the extremal person was in fact relatively recently much like other people, or has endured few costs in their journey - this is the idea behind before/after images, and is also achieved by Jeff’s post. Another kind of solution is drawing attention to yourself before you become very extremal, so that observers can observe your progress, not just the result.

Doubt regarding basic assumptions

Robin wonders (in conversation) why apparently fairly abstract topics don’t get more attention, given the general trend he notices toward more abstract things being higher status. In particular, many topics we and our friends are interested in seem fairly abstract, and yet we feel like they are neglected: the questions of effective altruism, futurism in the general style of FHI, the rationality and practical philosophy of LessWrong, and the fundamental patterns of human behavior which interest Robin. These are not as abstract as mathematics, but they are quite abstract for analyses of the topics they discuss. Robin wants to know why they aren’t thus more popular.

I’m not convinced that more abstract things are more statusful in general, nor that it would be surprising if such a trend were fairly imprecise. However, supposing the trend is real and the neglect is surprising, here is an explanation for why some especially abstract things seem silly. It might be interesting anyway.

Lemma 1: Rethinking common concepts, and being more abstract tend to go together. For instance, if you want to question the concept ‘cheesecake’ you will tend to do this by developing some more formal analysis of cake characteristics, and showing that ‘cheesecake’ doesn’t line up with the more cutting-nature-at-the-joints distinctions. Then you will introduce another concept which is close to cheesecake, but more useful. This will be one of the more abstract analyses of cheesecakes that has occurred.

Lemma 2: Rethinking common concepts and questioning basic assumptions look pretty similar. If you say ‘I don’t think cheesecake is a useful concept - but this is a prime example of a squishcake’, it sounds a lot like ‘I don’t believe that cheesecakes exist, and I insist on believing in some kind of imaginary squishcake’.

Lemma 3: Questioning basic assumptions is also often done fairly abstractly. This is probably because the more conceptual machinery you use, the more arguments you can make. e.g. many arguments you can make against the repugnant conclusion’s repugnance work better once you have established that aversion to such a scenario is one of a small number of mutually contradictory claims, and have some theory of moral intuitions as evidence. There are a few that just involve pointing out that the people are happy and so on, but where there are a lot of easy non-technical arguments to make against a thing, it’s not generally a basic assumption.

Explanation: Abstract rethinking of common concepts is easily mistaken for questioning basic assumptions. Abstract questioning of basic assumptions really is questioning basic assumptions. And questioning basic assumptions has a strong surface resemblance to not knowing about basic truths, or at least not having a strong gut feeling that they are true.

Not knowing about basic truths is not only a defining characteristic of silly people, but also one of the more hilarious of their many hilarious characteristics. Thus I suspect that when you say ‘I have been thinking about whether we should use three truth values: true, false, and both true and false’, it sounds a lot like ‘My research investigates whether false things are true’, which sounds like ‘I’m yet to discover that truth and falsity are mutually exclusive opposites’, which sounds a bit like ‘I’m just going to go online and check whether China is a real place’.

Some evidence to support this: when we discussed paraconsistent logic at school, it was pretty funny. If I recall, most of the humor took the form ‘Priest argues that bla bla bla is true of his system’ … ‘Yeah, but he doesn’t say whether it’s false, so I’m not sure if we should rely on it’. I feel like the premise was that Priest had some absurdly destructive misunderstanding of concepts, such that none of his statements could be trusted.

Further evidence: I feel like some part of my brain interprets ‘my research focuses on determining whether probability theory is a good normative account of rational belief’ as something like ‘I’m unsure about the answers to questions like “what is 50%/(50% + 25%)?”’. And that part of my brain is quick to jump in and point out that this is a stupid thing to wonder about, and it totally knows the answers to questions like that.

Other things that I think may sound similar:

  • ‘my research focusses on whether not being born is as bad as dying’ <—> ‘I’m some kind of socially isolated sociopath, and don’t realize that death is really bad’
  • ‘We are trying to develop a model of rational behavior that accounts for the Allais paradox’ <—> ‘we can’t calculate expected utility’
  • ‘Probability and value are not useful concepts, and we should talk about decisions only’ <—> ‘My alien experience of the world does not prominently feature probabilities and values’
  • ‘I am concerned about akrasia’ <—> ‘I’m unaware that agents are supposed to do stuff they want to do’
  • ‘I think the human mind might be made of something like sub-agents’ <—> ‘I’m not familiar with the usual distinction of people from one another’.
  • ‘I think we should give to the most cost-effective charities instead of the ones we feel most strongly for’ <—> ‘Feelings…what are they?’

I’m not especially confident in this. It just seems a bit interesting.

Does increasing peak typing speed help?

Is it worth learning to type faster? If, like me, basically what you do is type, this seems likely to be a pretty clear win, if you have any interventions that would improve it at all. Ryan Carey suggested a painful-sounding intervention which improves maximum typing speed a lot, but said that since he usually types substantially below his maximum typing speed, this would not help. His model seems to be that typing speed is basically bottlenecked either by physical typing ability or by something else (like thinking speed), and it is not much worth trying to speed up the one that is not bottlenecking the process. This sounds pretty reasonable, but it did not match my intuitions, and seemed extremely cheap to test, so I decided to test it.

I tried a number of ways of reducing my typing speed, and chose three that were reasonably spaced across the spectrum (~90wpm, ~60wpm, ~30wpm) on a typing speed test. These were (respectively) Dvorak keyboard layout, Dvorak with my left pinky finger tied up with a rubber band or tape, and Qwerty keyboard layout. I measured each one three times on that test, and three times on longer (3-5m) journaling activities, mostly writing about issues in my life that I wanted to think about anyway. These journaling bouts tended to be faster than I would usually casually write I think, so this does not really test how much peak typing speed improves combination writing/staring into space speed. But they were slow enough to be real journaling, with some real insights, and were substantially slower than peak typing speed.

My results are below. They are a bit dubious, but I think good enough for the purpose. Moving from the middle method to the top method improved my real speed in proportion to my peak speed. Between the bottom two, it made little difference. Further details are more confusing. That there is no difference between peak and real speed for Qwerty suggests that physical typing is a big bottleneck there. However, improving the typing method to handicapped Dvorak (which has a higher peak speed) doesn’t improve real speed much either, suggesting inconsistently that thinking is a huge bottleneck. That also seems implausible if thinking is not such a big bottleneck at higher speeds (implied by the fact that real speeds get a lot higher with better typing methods). But if I wanted to think more about these things, I should probably just do some more tests. I’m not convinced this is worth it, but if anyone else does any, I’m curious to see.

Incidentally I suspect Qwerty gets a boost in journaling relative to typing tests. This is because I have to look at my hands a fair bit to do it, which is harder when you also have to look at the screen sometimes too.

I’m more inclined to trust the patterns at the faster speeds, which I tell myself is because they are much closer to my real typing speed (from which I might improve), but which could obviously be because they support my prior intuitions.

Incidentally, here are a few plausible-to-me models under which either thinking faster or typing faster would increase real typing speed (a rough numerical sketch of the second one follows the list):

  • Thinking speed increases linearly with exogenously increased typing speed, and is usually the bottleneck - then increasing physical typing ability always increases thinking speed, as does increasing thinking speed itself.
  • You do bursts of thinking and typing, basically one at a time – then your speed is something like a weighted mean of the speeds.
  • You either type or think all the time (this is the bottlenecking activity), and do the other activity part of the time, however when you do the faster one it slows down the bottlenecking activity, so speeding up either activity speeds the entire process.
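Here is a rough way to make the second model concrete (my own sketch, not something from the post): suppose each word needs a burst of thinking and a burst of typing done one at a time, so the time per word is 1/think_wpm + 1/type_wpm. Fitting the implied thinking speed from the Dvorak row of the tables below and predicting the other rows gives a crude check against the measurements.

```python
measured = {            # method: (peak test wpm, journaling wpm), from the tables below
    "Dvorak":   (91, 63),
    "H-Dvorak": (60, 40),
    "Qwerty":   (34, 35),
}

peak, real = measured["Dvorak"]
think_wpm = 1 / (1 / real - 1 / peak)        # implied thinking speed, roughly 205 wpm

for method, (peak, real) in measured.items():
    predicted = 1 / (1 / peak + 1 / think_wpm)
    print(f"{method:9s} predicted {predicted:4.0f} wpm, measured {real} wpm")
```

Fit this way, the burst model over-predicts the handicapped Dvorak speed and under-predicts Qwerty, which is consistent with the confusion described above rather than a resolution of it.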

 

Mean speeds (wpm)

            Test    Journal
Dvorak        91         63
H-Dvorak      60         40
Qwerty        34         35

All data (wpm)

            Test                         Journal
            Time 1   Time 2   Time 3     Time 4   Time 5   Time 6
Dvorak          90       93       89         50       73       66
H-Dvorak        62       68       51         29       46       44
Qwerty          35       31       36         29       42       35


Intelligence Amplification Interview

Ryan Carey and I discussed intelligence amplification as an altruistic endeavor with Gwern Branwen. Here (docx) (pdf) is a summary of Gwern’s views. Also more permanently locatable on my website.

How to trade money and time

Time has a monetary value to you. That is, money and time can be traded for one another in lots of circumstances, and there are prices that you are willing to take and prices you are not. Hopefully, other things equal, the prices you are willing to take are higher than the ones you aren’t.

Sometimes people object to the claim that time has a value in terms of money, but I think this tends to be a misunderstanding, or a statement about the sacredness of time and the mundanity of money. I also suspect that the feeling that time is sacred, and money in some sense is not, prompts even people who accept that money and time can in principle be compared in value to object to actually doing the comparison much. There are further reasons you might object to this too. For instance, perhaps having an explicit value on your time makes you feel stressed, or cold calculations make you feel impersonal, or accurate appraisals of your worth per hour make you feel arrogant or worthless.

Still I think it is good to try to be aware of the value of your time. If you have an item, and you trade it all day long, and you don’t put a consistent value on it, you will be making bad trades all over the place. Imagine if you accepted wages on a day to day basis while refusing to pay any attention to what they were. Firstly, you could do a lot better by paying attention and accepting only the higher ones. But secondly, you would quickly be a target for exploitation, and only be offered the lowest wages.

I don’t think people usually do this badly in their everyday trading off of time and money, because they do have some idea of the trades they are making, just not a clear one. But many other things go into the sense of how much you should pay to buy time in different circumstances, so I think the prices people take vary a lot when they should not. For instance, a person who would not accept a wage below $30/h will waste an hour in an airport because they don’t have internet, instead of buying wifi for $5, because they feel this is overpriced. Or they will search for ten minutes to find a place that sells drinks for $3 instead of $4, because $4 is a lot for a drink. Or they will stand in line for twenty minutes to get the surprisingly cheap and delicious lunch, and won’t think of it as being an expensive lunch now.
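Written out, the implied rates at which the person in these examples is selling their time (the first two use the numbers above; the saving from the cheap lunch is a figure I made up for illustration):

```python
examples = [
    # (description, dollars saved by spending the time, hours spent)
    ("skip $5 airport wifi and waste an hour",        5.00, 1.00),
    ("search 10 min for a $3 drink instead of $4",    1.00, 10 / 60),
    ("queue 20 min for a lunch that saves, say, $5",  5.00, 20 / 60),
]

for what, dollars, hours in examples:
    print(f"{what}: selling time at ${dollars / hours:.2f}/hour "
          f"(versus a $30/hour reservation wage)")
```

Each of these trades values an hour at well below the $30 wage the same person would refuse.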

I agree that time is very valuable. I just disagree that you should avoid putting values on valuable things. What you don’t explicitly value, you squander.

It can be hard to think of ways that you are trading off money and time in practice. In response to a request for these, below is a list. They are intended to indicate trade-offs which might be helpful if you want to spend more money at the expense of time, or vice versa, in a given circumstance. Some are written as if to suggest that you should move in one particular direction; remember that you can generally move in the opposite direction also.


Meta-error: I like therefore I am

I like Scott’s post on what LessWrong has learned in its lifetime. In general I approve of looking back at your past misunderstandings and errors, and trying to figure out what you did wrong. This is often very hard, because it’s hard to remember or imagine what absurd thoughts (or absences of thought) could have produced your past misunderstandings. I think this is especially because nonsensical confusions and oversights tend to be less well-formed, and thus less organizable or memorable than e.g. coherent statements are.

In the spirit of understanding past errors, here is a list of errors which I think spring from a common meta-error. Some are mentioned in Scott’s post, some were mine, some are others’ (especially those who are a combination of smart and naive I think), a few are hypothetical:

  • Because I believe agent-like behavior is obviously better than randomish reactions, I assume I am an agent (debunked!).
  • Because I think it is good to be sad about the third world, and not good to be sad about not having enough vitamin B, I assume the former is why I am sad.
  • Because I explicitly feel that racism is bad, I am presumably not racist.
  • Because my mind contains a line of reasoning that suggests I should not update much against my own capabilities because I am female, presumably I do no such thing.
  • Because I have formulated this argument that it is optimal for me to think about X-risks, I assume I am motivated to (also debunked on LW).
  • Because I follow and endorse arguments against moral realism, and infer that on reflection I prefer to be a consequentialist, I assume I don’t have any strong moral feelings about incest.
  • Because I have received sufficient evidence that I should believe Y, I presumably believe Y now.
  • I don’t believe Y, and the only reason I endorse to not believe things is that you haven’t got enough evidence for them, therefore I must not have enough evidence to believe Y.
  • Because I don’t understand the social role of Christmas, I presume I don’t enjoy it (note that this is a terrible failing of the outside view: none of those people merrily opening their presents understands the social role either).
  • Because I don’t endorse the social role of drinking, I assume I don’t enjoy it.
  • Because signaling sounds bad to me, I assume I don’t do it, or at least not as much as others.
  • Because I know the cost of standing up is small (it must be, it’s so brief and painless!), this cannot be a substantial obstacle to going for a run (debunked!).
  • I know good motives are better than bad motives, so presumably I’m motivated by good motives (unlike the bad people, who are presumably confused over whether good things are the things you should choose)
  • I have determined that polyamory is a good idea and babies are a bad idea, therefore I don’t expect to feel jealousy or any inclination to procreate, in my relationships.

In general, I think the meta problem is failing to distinguish between endorsing a mental characteristic and having that characteristic. Not erroneously believing that the two are closely related, but actually just failing to notice there are two things that might not be the same.

It seems harder to make the same kind of errors with non-mental characteristics. Somehow it’s more obvious to people that saying you shouldn’t smoke is not the same as not smoking.

With mental characteristics however, you don’t know how your brain works much at all, and it’s not obvious what your beliefs and feelings are exactly. And your brain does produce explicit endorsements, so perhaps it is easy to identify those with the mental characteristics that the endorsements are closely related to. Note that explicitly recognizing this meta-error is different from it being integrated into your understanding.

Interview with Cool Earth

An interview with the people of Cool Earth, a charity I investigated and ultimately recommended as a relatively good one while visiting GWWC last summer.