Imaginary queues

I thought of an interesting idea, then searched to see if anyone had done it. It seemed like not, so I wrote the below post. Then I looked once more, and found a few instances (finding one makes it easier to find others). I still think the proposal is good, and I have little idea whether anyone is doing it competently. However it is not as novel as I hoped, and one must wonder why such things have not been successful enough for me to have heard of them (even after trying unusually hard to hear about them).

Robin Hanson suggested that, instead of modifying the post, I put it up more or less as it was before I knew such a thing existed, so that I can later compare the details of my suggestion to the details of systems that are real (or real minus any widespread adoption). This can make for a rare test of how different things look when you have an actual business providing a service, versus a daydream about one. This seems like a good idea to me, so here you have it. The changes I made were adding this part, and removing a half-written couple of sentences at the end about how there didn’t seem to be anyone doing this, though there are some related things. I mean to compare it with a real system another time.

***

Consider how much time people spend in queues. Here is a proposal for reducing that dramatically, with a smartphone app.

The app controls a virtual queue. You tell your phone you want to join the queue. You continue shopping or whatever. Your phone pings you when you are near the front of the queue. You wander over and get in the physical line just before it is your turn.

There are many details to iron out here, but there seem to be plausible solutions:

  1. How can people smoothly use this when a large fraction of people won’t have it?
    In the simplest case, places that have multiple queues could dedicate one to the app. Where this is not practical, the owner of the queue might have a smartphone or tablet at the front running a different version of the software, which would allow others to add either their names to the queue – in which case they can stand around as usual until called – or their phone numbers and a time estimate, in which case they can also wander a bit.
  2. How do you know how long the queue is?
    The app could trivially tell you this. Ultimately, it could also probably give you better stats on how much time it will likely take to get through it.
  3. How does the person serving the queue know how long the queue is?
    The queue owner has their own version of the app, which shows them the current queue. This allows them to tell the app when the next person is up, make judgements about staff allocation, and resolve any disputes about whose turn it is. The same device might be used to add people to the queue who don’t have the app.
  4. How does the app know whether you have taken your turn?
    You get a final ping, telling you it is your turn. You respond to it by saying that you are doing so, or not. If you say not, then someone else is immediately called up, so if you do it wrong, it will probably be clear. The queue owner also has control of the queue, so if anything confusing happens, they can ask who you are, and tick off the right person on their app.
  5. What if you don’t get there in time?
    You could be moved to the next position in line a couple of times before being pushed out of it entirely (see the sketch after this list).
  6. How does your phone know how long you need to get to the queue? Or do you just have to be within a few minutes of the queue all the time, for instance?
    In the simplest version, having a blanket few-minute warning would be plenty useful in many cases. Google Now already tells you when you should leave to get to a place, though, so I assume similar functionality is feasible in the long run.
  7. If you can’t go that far away, would this really be so useful?
    Take the supermarket case. In the worst case, it is probably more amusing to wander around nearby than stand in a queue. In more optimistic but fairly plausible cases, you can do your important shopping then get in queue, then do any more discretionary shopping, and thereby save time. More optimistically, if you come to trust the time estimates, you could start your shopping even after you got in line, and only be not that far away at the very end.
  8. Why would anyone take this up?
    In the long run, this system seems likely to be much better to me. But in the short run, it might be confusing or annoying to customers or staff, or not be very smooth, or doing things differently might just be risky in general. There are some inducements to take it up at the start though. Obviously, it might quickly work well, look innovative, and be much less annoying than a queue. Also, at the start, it should cause more customers to wander around your shop while they wait, probably encouraging them to buy more things from you. If you have multiple queues, you might avoid the downside from people feeling forced to do a new thing, and also be able to more gently encourage them to download the app, as they see an inviting looking empty queue. I think another good case is probably restaurants. Many restaurants currently try to have a system like this, using pen and paper, with people wandering back pretty often to check in, or sitting around on the sidewalk. Often the queue takes a long time, so it is valuable to be able to leave. In this situation, I think many people would be happier to just get a ping when it is their turn, especially if in the meantime they can see roughly how long the wait is without walking back. In general, queues are pretty annoying already, so it seems it shouldn’t be that hard to be less annoying.
  9. How do you know who is really first if there is someone already there?
    If you get a ping when it is your turn, this should be clear.
  10. Can you join the queue way early, then wander in whenever you feel like it?
    If, when you miss your place, you are given too large a window to come back in, then you might have many people who could just turn up at any point, making it hard to know the real length of the queue. Thus you may want to avoid this. Though if you have a fairly good sense in practice of how often such people show up, it might not be that bad.
  11. Is it too annoying for people to download an app, and subsequently open it?
    Again, it seems best if this kind of thing starts in contexts where there are multiple queues. Also, even if downloading an app is annoying, remember that your alternative is standing in a queue for ten minutes. Standing bored in a queue also seems like a pretty likely context for someone to download an app, especially an app that will allow them to not do that again. Alternatively, you could partner with a pre-existing app running related services.
  12. Probably many others that don’t strike me at this moment.
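
For concreteness, here is a minimal sketch of the queue bookkeeping described in points 3 to 5 above, in Python. The names (VirtualQueue, report_miss, and so on) and the two-miss limit are invented for illustration; a real app would add the notifications, time estimates, and owner interface discussed above.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    misses: int = 0                 # times they failed to show up when called

class VirtualQueue:
    MAX_MISSES = 2                  # moved back this many times before being dropped

    def __init__(self):
        self.members = deque()

    def join(self, name):
        """Add someone to the back of the queue (from their phone, or the owner's tablet)."""
        self.members.append(Member(name))

    def length(self):
        return len(self.members)

    def call_next(self):
        """The person who should be pinged that it is their turn."""
        return self.members[0] if self.members else None

    def confirm_served(self):
        """Owner ticks off the front person once they have taken their turn."""
        if self.members:
            self.members.popleft()

    def report_miss(self):
        """The front person didn't show: move them back a place, or drop them after repeated misses."""
        if not self.members:
            return
        missed = self.members.popleft()
        missed.misses += 1
        if missed.misses <= self.MAX_MISSES:
            # 'moved to the next position in line': re-insert just behind the new front person
            self.members.insert(min(1, len(self.members)), missed)
        # otherwise they are pushed out of the queue entirely
```

The phone-number-and-time-estimate fallback for people without the app would just be another way of calling join and delivering the ping.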

In the long run, there would be other useful services you could add. People could get in line whenever, and the app could tell them when to leave home. They could say ahead of time when they want to get to the front of the line, and be added at roughly the right time. The app could support purchase of better places in line from willing traders. You could naturally have good data on traffic, allowing better provision of services.

Overall, it just seems unlikely to me that the best way to keep track of a list of people in these modern times is to physically line them up.

Owen Cotton-Barratt on GPP

I interviewed Owen Cotton-Barratt about the Global Priorities Project, as part of a ‘shallow investigation’ (inspired by GiveWell) into cause prioritization which will be released soon. Notes from the conversation are cross-posted here from the 80000hours blog, and also available in other formats on my website.

***

Participants

  • Owen Cotton-Barratt: Lead Researcher, Global Priorities Project
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute

Notes

This is a summary made by Katja of points made by Owen during a conversation on March 24 2014.

What the Global Priorities Project (GPP) does

The Global Priorities Project is new, and intends to experiment for a while with different types of projects and then work on those that appear highest value in the longer term. Their work will likely address questions about how to prioritize, improve arguments around different options, and will produce recommendations. It will probably be mostly research, but also include for instance some policy lobbying. They will likely do some work with concrete policy-relevant consequences and also some work on general high level arguments that apply to many things. Most features of the project are open to modification after early experimentation. There will principally be two audiences: policy makers and philanthropists, the latter including effective altruists and foundations. GPP has some access to moderately senior government and civil service policy people and is experimenting with the difficulty of pushing for high impact policies.

Research areas

Research topics will be driven by a combination of importance and comparative advantage. GPP is likely to focus on prioritizing broad areas rather than narrower interventions, though these things are closely linked. It is good to keep an eye on object level questions to ensure that you are thinking about things the right way. Owen is interested in developing frameworks for comparing things. This can produce value both in their own evaluations and through introducing metrics that others want to use, and so making proposals more comparable in general.

Work so far

Unprecedented technological risks

GPP has a draft report on unprecedented technological risks. They have shown it to several people involved in policy and received positive feedback. Somebody requested a stack of printed copies for their office, to hand out to people.

How to evaluate projects in ignorance of their difficulty

Owen is working on a paper about estimating returns from projects where we have little idea how difficult they are. Many research tasks seem to fall into this category. For instance, ‘how much money should we be putting into nuclear fusion?’ We have some idea of how good it would be, and not much idea of how hard it is. But we are forced to make decisions about this, so we make an implicit statement about likelihoods; while it is implicit, we may sometimes get it wrong.

Short term changes

In the short time GPP has existed, it has moved to focus less on policy, because experimenting with other things seemed valuable.

Views on methodology

Long run effects

It has been suggested by others that research on long run consequences of policies is prohibitively difficult. Owen believes that improving our predictions in expectation about these long run consequences is hard but feasible. This is partly because our predictions are currently fairly bad.

There are already some informal arguments in the community surrounding GPP about long run implications which it would be good to write up. For instance, there is an argument that human welfare improvements will tend to be better than animal welfare improvements in the long run, because the former have benefits which compound over time, while animal welfare improvements do not appear to. This is a good case where short term benefits predictably decouple from long term benefits, while in other cases short term benefits may be a reasonable proxy.

GPP will likely focus on long run effects to some extent, but not solely. Owen believes they are very important. However he also thinks routes to impact involve bringing people on board with the general methodology of prioritization, and more speculative research is less persuasive. He thinks people interested in prioritization tend to think long run impacts dominate the value of interventions, but focusing there too strongly will cause others to write us off. He also thinks that we will ultimately need to use some short-term proxies for long term benefits.

Quantitativeness

Owen is in favor of a relatively high degree of quantification in this kind of research. However this has caveats, and he advocates awareness of possible dangers of quantification. We can be too trusting of the numbers produced in this way. We should be careful about models. Sometimes it is better to throw up our hands and say ‘we don’t know how to model this’ and give some qualitative considerations than to proceed with a bad model. For long term effects, we are at the stage where the best quantified models may be worse than qualitative arguments. However we should be working toward quantification. One benefit of quantification is that it improves conversations and truth seeking. Even before you know how to model a process, if you make explicit models then you can have explicit conversations with people about the models, and about what is and isn’t wrong with them.

Risks with quantification

Quantification is a natural tool if we want to make comparisons. If we want a shallow picture of what is going on, it is not clear that quantification will be useful.

Trying to break down intuitions into further details can make things worse. You can miss out factors and be unaware of the omission. You can pay too much attention to factors because you put them in your model, or disregard factors because you didn’t. You can confuse people into thinking you are more confident than you are. You can be duped by thinking you have a formula. Many people are quite bad at quantification, which makes it worse to advocate it in general. Then there are simple time costs: explicit quantification is time consuming.

Nonetheless, for questions we are interested in, Owen thinks it is important to try.

Methodological change and progress

GPP hopes to make methodological progress that will be applicable to any decisions. For instance their current work on evaluation under uncertainty about costs arose from their own work on unprecedented technological risks. After they have developed a general methodology there, they can try to apply it to further problems. Back and forth between concrete prioritization and abstract general questions is likely to characterize the work.

It seems generally useful when looking at more high level questions to pay attention to concrete cases, to check that your thinking is applicable and reasonable there.

Resources

The project currently uses much of Owen’s time and a small amount of several others’ time, perhaps summing to around one and a half full time people. It’s hard to estimate, because some of the meetings that involve other people would probably occur if GPP didn’t exist, under another project. Having a label probably makes somewhat more of these things happen. Niel Bowerman has been putting a nontrivial fraction of his time into it, but he will be cutting back to work on outreach.

The expenses of the organization are largely about one person’s worth of employment, plus some overheads in terms of office rent and sharing administrative staff. Some of the work comes from people being employed by FHI.

Where is the value of cause prioritization in general?

Owen is optimistic about cause prioritization, because it is neglected, and obviously important.

Current best guesses vs. best

Owen thinks there is quite a large range of cost effectiveness between different things, but not an absolutely enormous one.

Finding new best interventions vs. marginally improving a lot of good spending

There are different routes to impact with cause prioritization. Owen thinks a major route to impact is through laying bricks of prioritisation methodology. This will help people in the future to do better prioritisation (and could be better than anything we manage in the near-term future).

Among direct effects on funding allocation, there are also substantially different kinds of impact you might hope for. You can uncover new very high impact interventions, and do them. Or you can just get a group of people who are currently doing quite good things with their money to do better things with their money. Owen is slightly more optimistic about the latter, but fairly uncertain.

Object vs. meta level research

Prioritization work should be focused in the short term on a mixture of object level research output and methodological progress. GPP’s time will be split fairly evenly between them, perhaps leaning toward the methodological. It can be hard to work on methodology without engaging with more concrete questions.

Why do others neglect cause prioritization?

Owen’s best explanations for the neglect of cause prioritization research in general are that it’s hard, that it’s hard to evaluate, and that academic incentives for research topic choice are not socially optimal. Also, like most research, its costs are concentrated while its benefits are distributed.

Terminology

The term ‘cause prioritization’ seems suboptimal to Owen, and also to others. Sometimes it is good, but it is used more broadly than how people have traditionally thought of ‘causes’ and confuses people. It is also confusing because people think it is about causation. Owen would sometimes talk about ‘intervention areas’. He doesn’t have a good solution in general, but thinks we should be more actively looking for better terms.

Routes to contributing to prioritization

Thoughts on other organizations

Overall Owen thinks the entire area is under-resourced, so it’s great when other people are working on it. Even unsuccessful work will be valuable as it helps us to learn what works.

GIVEWELL LABS

Owen thinks GiveWell Labs is laying a lot of useful groundwork for prioritization work. The ‘shallow investigations’ they have been focusing on so far have their value in aggregating knowledge about causes, by researching the funding landscape, who is working on problems, and what is broadly being done. This knowledge base can then be used by anyone who is thinking about cause prioritization, whether in GiveWell Labs or outside. So there are big positive externalities from making the in-progress research public.

GiveWell Labs haven’t yet turned this broad knowledge of existing work into comparisons between areas or prioritization between them. Owen is keen to see what their approach will be.

LEVERAGE

Leverage are probably doing some prioritization research, which may be very valuable. So far, however, they haven’t published much. Owen would love to see more of their analysis. Communicating is a cost, but not communicating bears the risk that research will be duplicated elsewhere or that things they discover won’t be built upon.

COPENHAGEN CONSENSUS CENTRE

Owen is a big fan of the work the CCC does. They essentially represent expert economic opinion on global prioritisation.

There are a few reasons not to simply use their conclusions. The cost-benefit analysis which underlies most of their recommendations can in some cases miss important indirect effects. They don’t have a methodology which is strong at evaluating speculative work. And their recommendations are from the stance of global policy, which may not be directly applicable to altruists or even national policy-makers. However, their work remains one of the best resources we have today.

Funding GPP

GPP would welcome more funding. It would spend additional money securing the future of the project and hiring more researchers. It’s not clear how hard it is to attract good researchers, as they do not have the funds to hire another person yet, so have not advertised. At the moment money is the limiting factor in scaling up this research.

They would hire people who would similarly work on a variety of small-scale projects which seem important. Depending on their skills, they might work more directly on research or on using the research to leverage additional attention and work from the wider community. They would also be interested in hiring someone with more policy expertise. Owen has looked at this a bit, but it is probably not his comparative advantage.

Other conceivable projects

Influencing foundation giving

There are projects to get foundations to share more of their internal research, such as Glasspockets and IssueLab. Since small amounts of prioritization are done inside foundations, one could try to get these sharing efforts to focus more on sharing prioritization research. This sort of project has occurred to Owen in the past, but since these projects (e.g. Glasspockets) are already doing something like this, it doesn’t seem that neglected, so he thought it was probably not a high impact opportunity. Also, what the foundations are doing is likely not what we are thinking of when we say ‘cause prioritization’. They pick an area to focus on, then sometimes try to prioritize based on cost effectiveness within that area.

In response to the suggestion that it is best to focus on getting funders to care about prioritization, Owen thinks that may be true one day, but we first need higher quality research to be persuasive.

Another approach to influencing foundation giving is to get people who think about prioritization the right way into the foundations.

Influencing academia

It might be valuable to try to get cause prioritization taken up within academia, and seen as an academically respectable thing to do. This would help both with making the conclusions look more respectable, and in getting more brainpower from the class of people who would like to work at universities.

Who else does things like this?

The economics profession

We should think of academic economics as a part of the reference class of cause prioritization. A lot of economics focuses on long term effects of things. Economists would think of themselves as the experts on how to prioritize things, fairly justifiably. They have a lot of knowledge, which Owen tries to be broadly familiar with.

Owen thinks that despite doing a lot of relevant work, economists tend not to actually produce prioritizations of causes. So there may be a large backlog of relevant work to use in prioritizing causes. Owen has some idea of the landscape, though an imperfect one. He thinks it would be great to get more economists working in the area.

Doing good in a very noisy world

Suppose you live in a world where every time you try to do something good, it gives rise to such a giant waterfall of side effects that half the time the net effect of your actions is bad, and half the time it is good but largely from sources you didn’t anticipate. Also suppose that the analogous thing would happen if  you tried, hypothetically, to do bad things.

It sometimes seems plausible that we do live in such a world, and this sometimes makes it seem that doing good is a hopeless affair.

However I propose that in the most plausible worlds like this, when you try to do good things, in expectation you do a bit of good, and the good is merely overwhelmed by a vast random term, with expected value zero. In which case, even though your actions cause net bad half the time, they have positive expected value, and are about as good in expectation as you thought before considering the side effects. Is that so hopelessness-inducing?

If so, consider a related scenario. Every time you do anything, it has exactly the desired consequences, and no others of importance. Except that it also causes a random number generator to run, and add or subtract a random amount of utility from the world, with expected value zero. Does this seem hopeless, or do you just ignore the random number generator, and do good things?
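
Here is a toy simulation of the first scenario above, with made-up numbers (each act worth +1 in expectation, plus zero-mean noise a hundred times larger):

```python
import random

random.seed(0)

# Each act: +1 of intended good, plus a vast random side-effect term with mean zero.
N = 1_000_000
outcomes = [1 + random.gauss(0, 100) for _ in range(N)]

frac_net_bad = sum(o < 0 for o in outcomes) / N
mean_effect = sum(outcomes) / N

print(f"acts whose net effect was bad: {frac_net_bad:.1%}")   # roughly half
print(f"average effect per act:        {mean_effect:+.2f}")   # close to +1, up to sampling noise
```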

If our world is very noisy like this, is the aforementioned model a good description?

Inspired by a conversation with Paul Christiano, in which he said something like this.

Intuitions and utilitarianism

Bryan Caplan:

When backed into a corner, most hard-line utilitarians concede that the standard counter-examples seem extremely persuasive. They know they’re supposed to think that pushing one fat man in front of a trolley to save five skinny kids is morally obligatory. But the opposite moral intuition in their heads refuses to shut up.

Why can’t even utilitarians fully embrace their own theory? 

He raises this question to argue that ‘there was evolutionary pressure to avoid activities such as pushing people in front of trolleys’ is not an adequate debunking explanation of the moral intuition, since there was also plenty of evolutionary pressure to like not dying, and other things that we generally think of as legitimately good. 

I agree that one can’t easily explain away the intuition that it is bad to push fat men in front of trolleys with evolution, since evolution is presumably largely responsible for all intuitions, and I endorse intuitions that exist solely because of evolutionary pressures. 

Bryan’s original question doesn’t seem so hard to answer though. I don’t know about other utilitarian-leaning people, but while my intuitions do say something like:

‘It is very bad to push the fat man in front of the train, and I don’t want to do it’

They also say something like:

‘It is extremely important to save those five skinny kids! We must find a way!’

So while ‘the opposite intuition refuses to shut up’, if the so-called counterexample is persuasive, it is not in the sense that my intuitions agree that one should not push the fat man, and my moral stance insists on the opposite. My moral intuitions are on both sides.

Given that I have conflicting intuitions, it seems that any account would conflict with some intuitions. So seeing that utilitarianism conflicts with some intuitions here does not seem like much of a mark against utilitarianism. 

The closest an account might get to not conflicting with any intuitions would be if it said ‘pushing the fat man is terrible, and not saving the kids is terrible too. I will weigh up how terrible each is and choose the least bad option’. Which is what utilitarianism does. An account could probably concord more with these intuitions than utilitarianism does, if it weighed up the strength of the two intuitions instead of weighing up the number of people involved. 

I’m not presently opposed to an account like that I think, but first it would need to take into account some other intuitions I have, some of which are much stronger than the above intuitions: 

  • Five is five times larger than one
  • People’s lives are in expectation worth roughly the same amount as one another, all else equal
  • Youth and girth are not very relevant to the value of life (maybe worth a factor of two, for difference in life expectancy)
  • I will be held responsible if I kill anyone, and this will be extremely bad for me
  • People often underestimate how good for the world it would be if they did a thing that would be very bad for them.
  • I am probably like other people in a given way, in expectation
  • I should try to make the future better
  • Doing a thing and failing to stop the thing have very similar effects on the future.
  • etc.

So in the end, this would end up much like utilitarianism.

Do others just have different moral intuitions? Is there anything wrong with this account of utilitarians not ‘fully embracing’ their own theory, and nonetheless having a good, and highly intuitive, theory?

Reminders without times

Many times in life, a person wants to do a thing at a different time. For this to happen, they have to remember it at that different time. We have very good systems for this, as long as the other time can be specified as a clock time or date. That is, if you can say ‘I want to do this in three days’ or ‘remind me at 2pm tomorrow’, then you can look at a calendar every day, or make alarms and electronic alerts and so on. We also have reasonable systems if the other time doesn’t have to be very specific, beyond ‘later’. One can make a to-do list, or just leave the bill in the middle of the floor.

As far as I know, we have no such excellent ways to remember things at a specific point if the point is known by some other feature, such as ‘the next time at which I’m talking to my mother’, ‘next time I visit Chicago’, or ‘when I’m in a conversation and it seems awkward’.

In general, it is hard to do things when some fact obtains. This is partly because you are unlikely to be constantly checking whether that fact obtains, especially if you have many facts to check for. You can’t just go around asking yourself ‘am I having an awkward conversation? Am I driving? Am I standing up? Am I with Michael?…’. You are of course aware of all of these things anyway, in some sense. If someone asked you whether you were just driving, you would be able to respond without checking. However this does not seem to be sufficient awareness for you to reliably do a thing that you intended to do when driving. Somehow you have to both be aware of the driving, and aware of the ‘if driving, then practice singing’ implication, at the same time, and make the connection.

I’ve thought a bit about how to improve various aspects of my life, and realized after a while that most of these improvements are hindered by this problem, which is why it got my attention. It seems like I could shower faster, remember new names better, and improve my posture more, if only I noticed when I was in the correct situations to behave in the ways that I would like.

One basic problem is that you can describe a situation in many ways, so even if you ask yourself often ‘what am I doing?’, your description may not involve ‘I’m standing up’, so you won’t remember that you should adopt a good posture.

Here are some suggested solutions to this problem, in case you are interested. I don’t know if any are good, but thought I should share them, since I bothered to find them:

Incentives

Reward yourself. e.g. put some candy in your pocket, and every time you pay attention to whether your conversation partner is getting a word in, you get some. Alternatively, give yourself a ‘behavioral reward’ – smile or say ‘yay’ or something. Ideally, the reward should come quickly after the behavior. As well as providing reinforcement, a reward that you are aware of will probably also remind you of the desired behavior occasionally. e.g. when I see the pack of strawberry buttons in my bag, I remember what I have to do to get them.

Introduce a reward that you will frequently want, which can be combined with the activity. e.g. take up nicotine gum, and only chew it when you have thought about whether you are going about your current activity in a sensible manner. Always get a coffee at lunch time, then don’t sip it unless you are wearing ear plugs.

Reward yourself for even noticing the context. e.g. if you are in a conversation, and someone says their name for the first time, if you manage to say to yourself ‘hey, a name!’ then you get a prize later. Once you can do this, move up to actually taking the intended action (e.g. remembering the name).

Offer a prize to others if they catch you in the relevant context without doing the correct thing. e.g. give a dollar to your partner every time they see you slouching.

If you know there will be a desirable thing present at the time you should remember, then make the desirable thing contingent on remembering – e.g. if you know that at the time when you will want yourself to close your email, you will also want to look at a webcomic, allow yourself to look at the webcomic if you close your email. Hopefully at the time when you are considering whether you should look at the webcomic, you will remember that you have a great excuse to, as long as you close your email.

Count times you do the thing, or don’t do it. For instance, if you don’t want to touch your face throughout the day, a tally of the number of times you do it can help.

Make sure you don’t feel bad when you do the thing correctly, for some exogenous reason. e.g. if every time you pay extra attention to what the other party in a conversation wants to talk about, you feel guilty for not doing this naturally, you may be dissuaded from paying attention.

Social effects

Tell others that you are in favor of this thing (though I’ve also heard that committing to things publicly is actively harmful). e.g. If you tell others that you endorse thinking carefully before taking on commitments, you might feel more like the kind of person who does that, and remember to pause and evaluate the next request before agreeing to it.

Associate with people who endorse the thing. e.g. if you want to remember to speak more loudly and clearly, perhaps spend a bit of time at Toastmasters or an acting group.

Other strengthening of mental connections

Choose a more salient contextual trigger, and remember (using any of these techniques) to look for the less salient one when you see the more salient one. e.g. when you are in a lift, remember to check whether you are thinking about something pointless or good.

Visualize the connection: vividly imagine the situation that you want to do the thing in, and imagine yourself doing the thing. Put in lots of details. e.g. if you want to remember to ask an economist a particular question, next time you are talking to an economist, then think about the economists you are likely to talk to, and what economists are like in general, and the kinds of things you might be talking about with one, and the places you might be, and the kind of little cocktail sausages you might be eating, and imagine your awkward segue into this question, and asking them it, and waiting for them to answer.

Offline practice. Actually do the thing you want yourself to do, a number of times. e.g. if you want to do pushups while you wait for the microwave, then go and put something in the microwave right now and do pushups until it finishes. Then do that again, several times. (Try not to get the thing too hot).

Say out loud what you are going to do. e.g. ‘whenever I’m eating, I’m going to watch machine learning lectures’.

External reminders

Modern phone capabilities. You might be able to set your phone to tell you the next time you are entering the supermarket, or driving a car, etc. If not now, perhaps next time you get a phone.

Large numbers of reminders at not particularly special times. e.g. an alert which comes up on your phone or computer twenty times a day, asking if you are currently hyperventilating. I know someone who just looks through a list of possible contexts that things to remember depend on, roughly every day. e.g. Am I going to New York today? Nope. Am I going to the dentist?…

Noticeable accoutrements. e.g. if you wear a shiny bracelet, or an annoying rubber band, or an itchy sweater, you might just notice it very often. Then every time you notice it, you can say to yourself ‘am I projecting my voice right now?’. This requires you to learn the connection between seeing the shiny bling and asking the question, but that might be easier.

Sticky notes in relevant places. e.g. in your car, ‘look at the road!’.

Make the thing be at a specifiable time. e.g. set an alarm for 6pm which tells you to both eat your meal and call your mother, instead of trying to remember to call your mother whenever you happen to be eating.

Situation design

Change the situation to be one where you will more likely do the thing. If you want to remember to take a tablet with your meal, put the tablets next to the plates. If you want to remember to work out while you watch TV, put the weights in front of the TV. This kind of thing is closely related to making things easier to do, such that you can do them most of the time when you remember them, instead of mostly putting them off.

Make your routine avoid things you don’t want to happen. e.g. if you want to remember to suppress your compulsion to wash your hands, put the soap in the cupboard.

***

I repeat: I don’t know which of these work. I haven’t put a huge amount of time into it.

Should altruists pay for profitable things?

People often claim that activities which are already done for profit are bad altruistic investments, because they will be done anyway, or at least because the low hanging fruit will already be taken. It seems to me that this argument doesn’t generally work, though something like it does sometimes. Paul has written at more length about altruistic investment in profitable ventures; here I want to address just this one specific intuition which seems false.

Suppose there are a large number of things you can invest in, and for each one you can measure private returns (which you get), public returns (which are good for the world, but you don’t control), and total returns (the sum of those). Also, suppose all returns are diminishing, so if an activity is invested in, it pays off less the next time someone invests in it, both privately and publicly.

Suppose private industry invests in whatever has the highest private returns, until they have nothing left they want to invest. Then there is a market rate of return: on the margin more investment in anything gives the same private return, except for some things which always have lower private returns and are never invested in. This is shown in the below diagram as a line with a certain slope, on the private curve.

[Figure: Total returns and private returns to different levels of investment.]

There won’t generally be a market rate of total returns, unless people use the total returns to make decisions, instead of the private returns.  But note that if total returns to an endeavor are generally some fraction larger than private returns (i.e. positive externalities are larger than negative ones), then the rates of total returns available across interventions that are invested in for private good  should generally be higher than the market rate of private returns.

So, after the market has invested in the privately profitable things, the slope of every private returns curve for a thing that was invested in at all will be the same, except those that were never invested in. What do you know about those things? That their private returns slope must be flatter, and that they have been invested in less.

[Figure: Private returns for four different endeavors. Dotted lines show how much people have invested in the endeavor before stopping. At the point where people stop, all of the endeavors have the same rate of returns (slope).]

What does this imply about the total value from investing in these different options? This depends on the relationship between private value and total value.

Suppose you knew that private value was always a similar fraction of total value, say 10%. Then everything that had ever been invested in would produce 10x market returns on the margin, while everything that had not been would produce some unknown value which was less than that (since the private fraction would be less than market returns). Then the best social investments are those that have already been invested in by industry.

If, on the other hand, public value was completely unrelated to private value, then all you know about the social value of an endeavor that has already been funded is that it is less than it was initially (because of the diminishing returns). So now you should only fund things that have never been funded (unless you had some other information pointing to a good somewhat funded opportunity).

The real relationship between private value and total value would seem to lie between these extremes, and vary depending on how you choose endeavors to consider.
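
To make the two extreme cases concrete, here is a toy simulation under invented assumptions: marginal returns of the form a/(1+x) after x units of investment, a 5% market rate, and either a fixed 10% private share of total value or a public component drawn independently of the private one. It is only an illustration of the argument, not a model of any real market.

```python
import random

random.seed(1)
MARKET_RATE = 0.05      # marginal private return wherever the market is still investing

# Each endeavor: marginal private return a/(1+x) and marginal public return p/(1+x)
# after x units of investment. The market invests until a/(1+x) falls to MARKET_RATE.
endeavors = []
for _ in range(10_000):
    a = random.uniform(0.01, 0.20)             # initial marginal private return
    p = random.uniform(0.00, 0.20)             # initial marginal public return, unrelated to a
    invested = max(0.0, a / MARKET_RATE - 1)
    endeavors.append((invested > 0, a / (1 + invested), p / (1 + invested)))

# Case 1: total value is always 10x private value. Everything the market funded
# offers 10x the market rate at the margin; unfunded things offer strictly less.
case1_funded = [10 * mp for funded, mp, _ in endeavors if funded]
case1_unfunded = [10 * mp for funded, mp, _ in endeavors if not funded]
print("case 1 (total = 10x private):")
print("  funded, marginal total return:  ", round(sum(case1_funded) / len(case1_funded), 3))
print("  unfunded, marginal total return:", round(sum(case1_unfunded) / len(case1_unfunded), 3))

# Case 2: public value is unrelated to private value, but still diminishes with
# investment, so already-funded endeavors offer less public value at the margin.
case2_funded = [pub for funded, _, pub in endeavors if funded]
case2_unfunded = [pub for funded, _, pub in endeavors if not funded]
print("case 2 (public unrelated to private):")
print("  funded, marginal public return:  ", round(sum(case2_funded) / len(case2_funded), 3))
print("  unfunded, marginal public return:", round(sum(case2_unfunded) / len(case2_unfunded), 3))
```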

Note on replaceability

Replaceability complicates things, but it’s not obvious how much it changes the conclusions.

If you invest in something, you will lower the rate of return for the next investor in that endeavor, and so will often push other people out of that area, to invest in something else in the future.

If your altruistic investments tend to displace non-altruists, then the things they will invest in will less suit your goals than if you could have displaced an altruist. This is a downside to investing in profitable things: the area is full of people seeking profits. Whereas if altruists coordinate to only do non-profitable things, then when they displace someone, that person will move to something closer to what the displacing altruist likes.

In a world where social returns on unprofitable things are generally lower than social returns on profitable things though, it would be better to just displace a profit-seeking person who will go and do something else profitable and socially useful, unless you have more insights into the social value of different options than represented in the current model. If you do, then altruists might still do better by coordinating to focus on a small range of profitable and socially valuable activities.

For the first case above, where private value is a constant fraction of total value, replaceability is immaterial. If people move out of your area to invest in another area with equal private returns, they still create the same social value. Though note that with the slightly lower rate of returns on the margin, they will consume a bit more instead of investing. Nonetheless, as without the replaceability considerations, it is best here to invest in profitable ventures.

In the second case, where private and public returns are unrelated, investing in something private will push people to other profitable interventions with random social returns. This is less good than pushing altruists to other unprofitable interventions, but it was already better in this case to invest in non-profitable ventures, so again replaceability doesn’t change the conclusion.

Consider an intermediate case where total returns tend to be higher than private returns, but they are fairly varied. Here replaceability means that the value created from your investment is basically the average social return on random profitable investments, not the return on the particular one you invest in. On this model, that doesn’t change anything (since you were estimating social returns only from whether something was invested in or not), but if you knew more it would. The basic point here though is that just knowing that something has been invested in is not obviously grounds to think it is more or less good as a social investment.

Conclusions

If you think the social value of an endeavor is at least likely to be greater than its private value, and it is being funded by private industry, you can at least lower bound its total value at market returns. Which is arguably a lot better than many giving opportunities that nobody has ever tried to profit from.

Note that in a specific circumstance you may know other things that can pin down the relationship between private and total value better. For instance, you might expect self-driving cars to produce total value that is many times greater than what companies can internalize, whereas you might expect providers of nootropics to internalize a larger fraction of their value (I’m not sure if this is true). So if hypothetically the former were privately invested in, and the latter not, you would probably like to invest more in the former.

How to buy a truth from a liar

Suppose you want to find out the answer to a binary question, such as ‘would open borders destroy America?’, or ‘should I follow this plan?’. You know someone who has access to a lot of evidence on the question. However you don’t trust them, and in particular, you don’t trust them to show you all of the relevant evidence. Let’s suppose that if they purport to show you a piece of evidence, you can verify that it is real. However since they can tell you about any subset of the evidence they know, they can probably make a case for either conclusion.  So without looking into the evidence yourself, it appears you can’t really employ them to inform you, because you can’t pay them more when they tell the truth. It seems to be a case of ‘he who pays the piper must know the tune’.

But here is a way to hire them: pay them for every time you change your mind on the question, within a given period. Their optimal strategy for revealing evidence to make money must leave you with the correct conclusion (though maybe not all of the evidence), because otherwise they would be able to get in one more mind-change by giving you the remaining evidence. (And their optimal strategy overall can be much like their optimal strategy for making money, if enough money is on the table).
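
Here is a brute-force check of that argument on a toy model. The setup is invented for illustration: each piece of evidence is a log-odds nudge, you believe the claim exactly when the revealed evidence sums to a positive number, and the adviser may reveal pieces one at a time in any order and stop whenever they like, earning a payment per change of mind.

```python
from itertools import permutations

evidence = [2.0, -1.5, 0.7, -0.4, -2.2]        # made-up log-odds contributions
true_conclusion = sum(evidence) > 0            # what you would believe given all the evidence

def mind_changes(revealed):
    """Count mind changes, and the final belief, for a given reveal sequence."""
    total, belief, changes = 0.0, False, 0     # start out believing 'no'
    for e in revealed:
        total += e
        new_belief = total > 0
        if new_belief != belief:
            changes += 1
        belief = new_belief
    return changes, belief

# Search every reveal order and stopping point for the most lucrative strategy.
best = max((mind_changes(order[:k]), order[:k])
           for order in permutations(evidence)
           for k in range(len(evidence) + 1))
(best_changes, final_belief), best_reveals = best

print("most lucrative reveal sequence:", best_reveals)
print("mind changes paid for:", best_changes)
print("final belief matches the all-evidence conclusion:", final_belief == true_conclusion)
```

Changing the made-up numbers changes the best sequence, but its endpoint keeps matching the full-evidence conclusion, for exactly the reason given above: a strategy that left you with the wrong conclusion could always earn one more payment by revealing the rest.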

This may appear to rely on you changing your mind according to evidence. However I think it only really requires that you have some probability of changing your mind when given all of the evidence.

Still, you might just hear their evidence, and refuse to officially change your mind, ever. This way you keep your money, and privately know the truth. How can they trust you to change your mind? If the question is related to a course of action, your current belief (a change of which would reward them) can be tied to a commitment to a certain action, were the period to end without further evidence provided. And if the belief is not related to a course of action, you could often make it related, via a commitment to bet.

This strategy seems to work even for ‘ought questions’, without the employee needing to understand or share your values.

Why would info worsen altruistic options?

One might expect that a simple estimation would be equally likely to overestimate or underestimate the true value of interest. For instance, a back of the envelope calculation of the number of pet shops in New York City seems as likely to be too high as too low.

Apparently this doesn’t work for do-gooding. The more you look into an intervention, the worse it gets. At least generally. At least in GiveWell’s experience, and my imagination. I did think it fit my real (brief) experience in evaluating charities, but after listing considerations I considered in my more detailed calculation of Cool Earth’s cost-effectiveness, more are positive than negative (see list at the end). The net effect of these complications was still negative for Cool Earth, and I still feel like the other charities also suffered many negative complications. However I don’t trust my intuition that this is obviously strongly true. More information welcome.

In this post I’ll assume charitable interventions do consistently look worse as you evaluate them more thoroughly, and concern myself with the question of why. Below are a number of attempted explanations for this phenomenon that I and various friends can think of.

Regression to the altruistic intervention mean

Since we are looking for the very best charities, we start with ones that look best. And like anything that looks best, these charities will tend to be less good than they look. This is an explanation Jonah mentions.

In this regression to the mean story, which mean is being regressed to? If it is the mean for charities, then ineffective charities should look better on closer inspection. I find this hard to believe. I suspect that even if a casual analysis suggests $1500 will give someone a one week introduction to food gardening, which they hope will reduce their carbon footprint by 0.1 tonnes per year, the real result of such spending will be much less than the tonne per $1000 implied by a simple calculation. The participant’s time will be lost in attending and gardening, the participants probably won’t follow up by making a garden, they probably won’t keep it up for long, or produce much food from it, and so on. There will also be some friendships formed, some leisure and mental health from any gardening that ultimately happens, some averted trips to the grocery store. My guess is that these positive factors don’t make up for the negative ones any better than they do for more apparently effective charities.

Regression to the possible action mean + value is fragile

Instead perhaps the mean you should regress to is not that of existing charities, but rather that of possible charities, or – similarly – possible actions. This would suggest all apparently positive value charities are worse than they look – the average possible action is probably neutral or negative. There are a lot of actions that just involve swinging your arms around or putting rocks in your ears for instance. Good outcomes are a relatively small fraction of possible outcomes, and similarly, good plans are probably a relatively small fraction of possible plans.

Advertising

The initial calculation of a charity’s cost-effectiveness usually uses the figures that the charity provided you with. This information might be expected to be selected for looking optimistic, whether due to outright dishonesty or to selection from among a number of possible pieces of data they could have told you about.

This seems plausible, but I find it hard to believe as the main effect. For one thing, in many of these cases it would be hard for the charity to be very selective – there are some fairly obvious metrics to measure, and they probably don’t have that many measures of them. For instance, for a tree planting charity, it is very natural for them to tell us how many trees they have planted, and natural for us to look at the overall money they have spent. They could have selected a particularly favorable operationalization of how many trees they planted, but there still doesn’t seem to be that much wiggle room, and this wouldn’t show up (and so produce a difference between the early calculation and later ones) unless one did a very in-depth investigation.

Common relationships tend to make things worse with any change

Another reason that I doubt the advertising explanation is that a very similar thing seems to happen with personal plans, which appear to be less geared toward advertising, at least on relatively non-cynical accounts. That is, if I consider the intervention of catching the bus to work, and I estimate that the bus takes ten minutes and comes every five minutes, and it takes me three minutes to walk at each end, then I might think it will take me 16-21 minutes to get to work. In reality, it will often take longer, and almost never take less time.

I don’t think this is because I go out of my way to describe the process favorably, but rather because almost every change that can be made from the basic setup – where I interact with the bus as planned and nothing else happens – makes things slower rather than faster. If the bus comes late, I will be late. If the bus comes more than a few minutes early, I will miss it and also be late. Things can get in the way of the bus and slow it down arbitrarily, but it is hard for them to get out of the way of the bus and speed it up so much. I can lose my ticket, or not have the correct change, but I can’t benefit much from finding another ticket, or having more change than I need.

These kinds of relationships between factors that can change in the world and the things we want are common. Often a goal requires a few inputs to come together, such that having extra of an input doesn’t help if you don’t have extra of the others, yet having less of one wastes the others. Having an extra egg doesn’t help me make more cake, while missing an egg disproportionately shrinks the cake. Often two things need to meet, so moving either of them in any direction makes things worse. If I need the eggs and the flour to meet in the bowl, pouring either of them anywhere different will cause destruction.

This could be classed under value being fragile, but I think it is worth pointing out the more specific forms this takes. In the case of charities, this might apply because you need a number of people to come to the place where a vaccination clinic has been set up, at the same time as some nurses and a large number of specific items.

This might explain why things turn out in practice to be worse than they were in plans, however I’m not sure this is the only thing to be explained. It seems that if you also look at plans in more depth (without seeing the messy real-world instantiation), you become more pessimistic. This might just be because you remember to account for the things that might go wrong in the real world. But looking at the Cool Earth case, these kinds of effects don’t seem to account for any of the negative considerations.

Negative feedbacks

Another common kind of relationship between things in the real world is the one where if you change a thing, it produces a force which pushes it back the way it came. For instance, if you donate blankets to the poor, they will acquire fewer blankets in other ways, so in total they will not have as many more blankets as you gave them. Or if you become a vegetarian, the price of meat will go down a little, and someone else will eat more meat. This does account for a few of the factors in the Cool Earth case, so that’s promising. For instance, protecting trees changes the price of wood, and removing carbon from the atmosphere lowers the rate at which other processes remove carbon from the atmosphere.

Abstraction tends to cause overestimation

Another kind of negative consideration in the Cool Earth case is that saving trees really only means saving them from being logged with about 30% probability. Similarly, it only means saving them for some number of years, not indefinitely. I think of these as instances of a general phenomenon where a thing gets labeled as something which makes up a large fraction of it, and then reasoned about as if it entirely consists of that thing. And since the other things that really make it up don’t serve the same purpose in the reasoning, estimates tend to be wrong in the direction of the thing being smaller. For instance, if I intend to walk up a hill, I might conceptualize this as involving entirely walking up the hill, and so make a time estimate from that. Whereas in fact, it will involve some amount of pausing, zigzagging, and climbing over things, which do not have the same quality of moving me up the hill at 3mph. Similarly, an hour’s work contains some non-work often, and a 300 pound cow contains some things other than steaks.

But, you may ask, shouldn’t the costs be underestimated too? And in that case, the cost-effectiveness should come out the same. That does seem plausible. One thought is that prices are often known a lot better than the value of whatever they are prices on, so there is perhaps less room for error. e.g. you can see how much it costs to buy a cow, due to the cow market, but it’s less obvious what you will get out of it. This seems a bit ad hoc, but on the other hand some thought experiments suggest to me that something like this is often going on when things turn out worse than hoped.

Plan-value

If you have a plan, then that has some value. If things change at all, then your plan gets less useful, because it doesn’t apply so well. Thus you should be expected to consistently lose value when reality diverges from your expectations in any direction, which it always does. Again, this mostly seeks to explain why things would turn out worse in reality than expected, but that could explain some details making plans look worse.

Careful evaluators attend to negatives

Suppose you are evaluating a charity, and you realize there is a speculative reason to suspect that the charity is less good than you thought. It’s hard to tell how likely it is, but you feel like it roughly halves the value. I expect you take this into account, though it may be a struggle to do so well. On the other hand, if you think of a speculative positive consideration that feels like it doubles the value of your charity, but it’s hard to put numbers on it, it is more allowable to ignore it. A robust, conservative estimate is often better than a harder to justify, more subjective, but more accurate estimate. Especially in situations where you are evaluating things for others, and trying to be transparent.

This situation may arise in part because people expect most things to be worse than they appear – overestimating value seems like a greater risk than underestimating it does.

Construal level theory

We apparently tend to think of valuable things (such as our goals) as more abstract than bad things (such as impediments to those goals). At least this seems plausible, and I vaguely remember reading it in a psychology paper or two, though I can’t find any now. If this is so, then when we do very simple calculations, one might expect them to focus on abstract features of the issue, and so disproportionately the positive features. This seems like a fairly incomplete explanation, as I’m not sure why good things would naturally seem more abstract. I also find this hard to cash out in any concrete cases – it’s hard to run a plausible calculation of the value of giving to Cool Earth that focuses on abstracted bad things, other than the costs which are already included.

Appendix: some examples of more or less simple calculations

A basic calculation of Cool Earth’s cost to reduce a tonne of CO2:

825,919 pounds in 2012 x 6 years / (352,000 acres x 260 tonnes/acre) = 6 pence/tonne = 10 cents/tonne
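For ease of checking the arithmetic, here is the same calculation as a minimal Python sketch. The figures are the ones quoted above; the pound-to-dollar conversion rate is my assumption (roughly the 2012 rate), not something stated in the post.

```python
# The basic Cool Earth estimate, using only the figures quoted above.
spending_2012_gbp = 825_919     # pounds spent in 2012
years = 6                       # years of spending assumed
acres = 352_000                 # acres protected
tonnes_per_acre = 260           # tonnes of CO2 per acre of forest

gbp_per_tonne = (spending_2012_gbp * years) / (acres * tonnes_per_acre)
usd_per_tonne = gbp_per_tonne * 1.6   # assumed ~2012 exchange rate, not from the post

print(f"{gbp_per_tonne * 100:.1f} pence/tonne")  # ~5.4 pence/tonne
print(f"{usd_per_tonne * 100:.1f} cents/tonne")  # ~8.7 cents/tonne
```

(The post rounds these to roughly 6 pence and 10 cents per tonne.)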

If we take into account:

  • McKinsey suggests approach is relatively cost-effective (positive/neutral)
  • Academic research suggests community-led conservation is effective (positive/neutral)
  • there are plausible stories about market failures (positive/neutral)
  • The CO2 emitted by above ground sources is an underestimate (positive)
  • The 260 tonnes/acre figure comes from one area, but the other areas may differ (neutral/positive)
  • The projects ‘shield’ other areas, which are also not logged as a result (positive)
  • Cool Earth’s activities might produce other costs or benefits (neutral/positive)
  • Upcoming projects will cost a different amount to those we have looked at (neutral/positive)
  • When forestry is averted, those who would have felled it will do something else (neutral/negative)
  • When forestry is averted, the price of wood rises, producing more forestry (negative)
  • Forests are only cleared 30% less than they would be (negative)
  • The cost they claim for protecting one acre is higher than that inferred from dividing their operating costs by what they have protected (negative)
  • The forest may be felled later (negative)
  • Similarity of past and future work (neutral)
  • other effects on CO2 from averting forestry (neutral)
  • CO2 sequestered in wood in long run (neutral)
  • CO2 sequestered in other uses of cleared land (neutral)

…we get $1.34/tonne. However that was 8 positive (or positive/neutral), 4 completely neutral, and only 5 negative (or negative/neutral) modifications.

In case you wonder, the ‘positive/neutral’ and ‘negative/neutral’ considerations made no difference to the calculation, but appear to lean if anything somewhat in their non-neutral directions. Some of these were not small, but were only taken as added support for parts of the story, so they improved our confidence without changing the estimate (yeah, I didn’t do the ‘update your prior on each piece of information’ thing, but more or less just multiplied together the best numbers I could find or make up).
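For concreteness, that ‘multiply the best numbers together’ procedure looks something like the following sketch. The base figure is the roughly 10 cents/tonne from the simple calculation; the adjustment factors here are made-up placeholders for illustration, not the ones actually used to reach the $1.34/tonne figure.

```python
# Sketch of the 'multiply the best numbers together' style of estimate.
# Every adjustment factor below is a made-up placeholder; the post does not
# spell out the actual factors behind the $1.34/tonne figure.
base_usd_per_tonne = 0.10   # roughly the simple estimate above

adjustments = {
    "only ~30% of logging actually averted": 1 / 0.30,   # from the 30% figure above
    "forest may be felled again later": 1.5,              # placeholder
    "stated cost per acre exceeds inferred cost": 2.0,    # placeholder
}

estimate = base_usd_per_tonne
for reason, factor in adjustments.items():
    estimate *= factor
    print(f"{reason}: x{factor:.2f} -> ${estimate:.2f}/tonne")
```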

Examples of estimates which aren’t goodness-related:

If I try to estimate the size of a mountain, will it seem smaller as I learn more? (yes)

Simple estimate: let’s say it takes about 10h to climb, and I think I can walk uphill at about 2mph => a 20 mile walk. Let’s say I think the angle is around 10 degrees. Then sin(10°) = height/20, so height = 20 sin(10°) ≈ 3.5 miles.
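The same estimate as a couple of lines of Python, in case you want to vary the guesses (all the inputs are the guesses above):

```python
import math

hours = 10        # guessed time to climb
speed_mph = 2     # guessed uphill walking speed
angle_deg = 10    # guessed slope angle

path_miles = hours * speed_mph                                  # 20 mile walk
height_miles = path_miles * math.sin(math.radians(angle_deg))   # rise along the slope

print(f"{height_miles:.1f} miles high")  # ~3.5 miles
```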

Other considerations:

  • the side is bumpy, so my 20 miles of walking probably covers less than 20 miles of straight path up the slope => the mountain is shorter
  • the path up the mountain is not straight – it goes around trees and rocks and things – so my 20 miles of walking covers even less than that => the mountain is shorter
  • when I measured that I could walk at about 2mph, I was probably walking the whole time, whereas going up the mountain I probably stop sometimes to look at things, or because I’m confused about the path, or whatever, so I probably can’t average 2mph up the mountain => the walk is probably less than 20 miles => the mountain is shorter

If I try to estimate the temperature, will it seem lower as I learn more? (neutral)

Simple estimate: I look at the thermometer, and it says a number. I think that’s the temperature.

Further considerations:

  • the thermometer could be in the shade or the sun or the wind (neutral)
  • the thermometer is probably a bit delayed, which could go either way (neutral)
  • I may misread the thermometer a bit depending on whether my eye is even with it – this could go either way (neutral)

 

Setting an example too good

Jeff Kaufman points to a kind of conflict: as he optimizes his life better for improving the world, his life looks less like other people’s lives, so makes a less good example showing that other people could also optimize their lives to make the world better. This seems similar to problems that come up often, regarding whether one should really do the (otherwise) best thing, if it will discourage onlookers from wanting to do the best thing.

These conflicts seem to be a combination of at least two different kinds of problems. Likely more, but I’ll talk about two.

One of them seems like it must be a general problem for people wanting others to follow them in doing a new thing. Since people are not doing the thing, it is weird. If you do the thing a little, you show onlookers that people like them can do it. When you do it a lot, they start to suspect you are just a freak. This might even put them off trying, since they probably aren’t the kind of person who could really succeed.

For instance, if you can kind of juggle, it suggests to an observer that they too could learn to kind of juggle. However if you can juggle fifty burning chairs, they begin to think that you are inherently weird. They also think that they are not cut out for the juggling world, since as far as they know they are not a freak.

This is a problem that both you and the observer would like to resolve – if it is really not very hard to become a good juggler, both of you would like the observer to know that.

The other kind of problem is less cooperative. Instead of observers thinking they can’t reach the extremes you have attained, they may just not want to. It looks weird, after all. You suspect that if they became half as weird as you, they would then want to be as weird as you, so you want them to ‘take the gateway drug’. They may also suspect that they would behave in this way, and not want to, and so would like to avoid becoming weird at all. At this point, you may be tempted to pretend that they would only ever get half as weird as you, because you know they would be happy to be half as weird, as long as it didn’t lead to extreme weirdness. So you may hide your weirdness. In which case you have another fairly general problem: that of wanting to deceive observers.

While there are many partial solutions to the second problem, it is a socially destructive zero-sum game that I’m not sure should be encouraged. The first problem seems more tractable and useful to solve.

One way to lessen the first problem is to direct attention to a stream of people between amateur and very successful. If the person who can juggle very impressively tends to hang out with some friends at various intermediate juggling levels, it seems more plausible that these are just a spectrum of skills that people can move through in their adult lives, rather than a discrete cluster of freaks way above the rest. Another way to lessen this effect is to just explicitly claim or demonstrate that the extremal person was in fact relatively recently much like other people, or has endured few costs in their journey – this is the idea behind before/after images, and is also achieved by Jeff’s post. Another kind of solution is drawing attention to yourself before you become very extremal, so that observers can observe your progress, not just the result.

Doubt regarding basic assumptions

Robin wonders (in conversation) why apparently fairly abstract topics don’t get more attention, given the general trend he notices toward more abstract things being higher status. In particular, many topics we and our friends are interested in seem fairly abstract, and yet we feel like they are neglected: the questions of effective altruism, futurism in the general style of FHI, the rationality and practical philosophy of LessWrong, and the fundamental patterns of human behavior which interest Robin. These are not as abstract as mathematics, but they are quite abstract for analyses of the topics they discuss. Robin wants to know why they aren’t thus more popular.

I’m not convinced that more abstract things are more statusful in general, or that it would be surprising if such a trend were fairly imprecise. However, supposing they are and it would be, here is an explanation for why some especially abstract things seem silly. It might be interesting anyway.

Lemma 1: Rethinking common concepts and being more abstract tend to go together. For instance, if you want to question the concept ‘cheesecake’, you will tend to do this by developing some more formal analysis of cake characteristics, and showing that ‘cheesecake’ doesn’t line up with the more cutting-nature-at-the-joints distinctions. Then you will introduce another concept which is close to cheesecake, but more useful. This will be one of the more abstract analyses of cheesecakes that has occurred.

Lemma 2: Rethinking common concepts and questioning basic assumptions look pretty similar. If you say ‘I don’t think cheesecake is a useful concept – but this is a prime example of a squishcake’, it sounds a lot like ‘I don’t believe that cheesecakes exist, and I insist on believing in some kind of imaginary squishcake’.

Lemma 3: Questioning basic assumptions is also often done fairly abstractly. This is probably because the more conceptual machinery you use, the more arguments you can make. e.g. many arguments you can make against the repugnant conclusion’s repugnance work better once you have established that aversion to such a scenario is one of a small number of mutually contradictory claims, and have some theory of moral intuitions as evidence. There are a few that just involve pointing out that the people are happy and so on, but where there are a lot of easy non-technical arguments to make against a thing, it’s not generally a basic assumption.

Explanation: Abstract rethinking of common concepts is easily mistaken for questioning basic assumptions. Abstract questioning of basic assumptions really is questioning basic assumptions. And questioning basic assumptions has a strong surface resemblance to not knowing about basic truths, or at least not having a strong gut feeling that they are true.

Not knowing about basic truths is not only a defining characteristic of silly people, but also one of the more hilarious of their many hilarious characteristics. Thus I suspect that when you say ‘I have been thinking about whether we should use three truth values: true, false, and both true and false’, it sounds a lot like ‘My research investigates whether false things are true’, which sounds like ‘I’m yet to discover that truth and falsity are mutually exclusive opposites’, which sounds a bit like ‘I’m just going to go online and check whether China is a real place’.

Some evidence to support this: when we discussed paraconsistent logic at school, it was pretty funny. If I recall, most of the humor took the form ‘Priest argues that bla bla bla is true of his system’ … ‘Yeah, but he doesn’t say whether it’s false, so I’m not sure if we should rely on it’. I feel like the premise was that Priest had some absurdly destructive misunderstanding of concepts, such that none of his statements could be trusted.

Further evidence: I feel like some part of my brain interprets ‘my research focuses on determining whether probability theory is a good normative account of rational belief’ as something like ‘I’m unsure about the answers to questions like “what is 50%/(50% + 25%)?”’. And that part of my brain is quick to jump in and point out that this is a stupid thing to wonder about, and it totally knows the answers to questions like that.

Other things that I think may sound similar:

  • ‘my research focuses on whether not being born is as bad as dying’ <—> ‘I’m some kind of socially isolated sociopath, and don’t realize that death is really bad’
  • ‘We are trying to develop a model of rational behavior that accounts for the Allais paradox’ <—> ‘we can’t calculate expected utility’
  • ‘Probability and value are not useful concepts, and we should talk about decisions only’ <—> ‘My alien experience of the world does not prominently feature probabilities and values’
  • ‘I am concerned about akrasia’ <—> ‘I’m unaware that agents are supposed to do stuff they want to do’
  • ‘I think the human mind might be made of something like sub-agents’ <—> ‘I’m not familiar with the usual distinction of people from one another’.
  • ‘I think we should give to the most cost-effective charities instead of the ones we feel most strongly for’ <—> ‘Feelings…what are they?’

I’m not especially confident in this. It just seems a bit interesting.