Slate Star Codex writes about a patient (or patient amalgam) who was suicidal, apparently for want of a few thousand dollars:
…So what bothered me is that psychiatric hospitalization costs about $1,000 a day. Average length of stay for a guy like him might be three to five days. So we were spending $5,000 on his psychiatric hospitalization, which was USELESS, so that we could send him out and he could attempt suicide again…
…Problem is, you don’t have to be an economics PhD to realize that “give $5,000 to anyone who attempts suicide and says they need it” might create some bad incentives.
I have no good solution to this…
I’m curious about solutions to this. However, I’m going to talk about a slightly different situation, where the person in question is driven by desperation to join a drug trial that will make the rest of their life of neutral value. The drug, Neutrazine, has no social value, and is being trialed for entirely morally neutral reasons.
So we want to be able to give people a few thousand dollars at times when their not taking Neutrazine is worth more than a few thousand dollars to us, and when a few thousand dollars would be enough to keep them away from it. We want to do this without causing them to get into such situations more readily, or to lie to us about whether they are really so badly off that they would take Neutrazine.
This sounds kind of hopeless: if you are willing to rescue people in bad situations, and they know this ahead of time, surely some people will get into bad situations more readily and/or lie about them.
Let’s start with just the problem of people taking more risks, knowing that you will save them. That is, suppose they honestly report their values.
This actually seems like a case where moral hazard should be avoidable. The person in question has the option to make their life worth nothing using Neutrazine, from any initial level of value. This is worth a positive amount to them, in the cases where you are hoping to help them. But if you give them just the same positive amount in money, this also makes their life neutral and takes the Neutrazine option off the table (because it would do nothing). So it doesn’t change the expected value from their perspective at all, and thus doesn’t influence their decisions ahead of time. Yet it is much better from your perspective, because you valued their life a lot more than a few thousand dollars.
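A toy calculation may make this invariance claim concrete. This is a sketch, not anything from the post: the function names, and the simplification of “value of a life” to a single number `v`, are mine.

```python
# Sketch: expected-value check of the no-moral-hazard argument above.
# Let v be the value of a person's remaining life to them (negative
# when things are bad). Neutrazine resets that value to exactly 0.

def best_option_alone(v):
    """Best the person can do unaided: keep v, or take Neutrazine (0)."""
    return max(v, 0)

def best_option_with_transfer(v):
    """We pay them -v dollars, just enough to bring their life to 0.
    Neutrazine would then change nothing, so the option is worthless."""
    after_transfer = v + (-v)       # life is now exactly neutral
    return max(after_transfer, 0)   # taking Neutrazine adds nothing

# For anyone we would want to help (v < 0), the two worlds are worth
# the same to the person (0 either way), so the promised transfer does
# not change their incentives ahead of time; but we have kept a life
# that we valued at far more than a few thousand dollars.
for v in (-5000, -2000, -100):
    assert best_option_alone(v) == best_option_with_transfer(v) == 0
```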
This might seem unsatisfactory, in that you got all the gains. However you could give them some of the gains, without influencing their behavior much. Also, there may be gains to their future self that were discounted more than you would like. And it might be that a person joining a Neutrazine trial will tend to be underestimating their future opportunities (due to the selection effect), so a life that is neutral according to their expectations will, on average, turn out better than a life guaranteed to be neutral.
This isn’t a solution, because it requires you to know how valuable things are to the other person. As mentioned earlier, they can just tell you their life is worse than it is. People whose lives are not bad at all can claim they are going on Neutrazine. Partial solutions to this could come from mechanism design, neuroimaging or lie detection. I’ll talk about the mechanism design option.
We have a collection of people whose lives have varying degrees of value to them. We would like to distinguish them, but they all look the same. One obvious difference is their willingness to join a Neutrazine trial. Once we have an action like this, that people with worse lives are more willing to take, we can use it to construct a choice that people will make differently, and which will also differentially help those who need it.
Here is an imperfect one: offer a bundle of $1,000 and a 10% chance of joining a Neutrazine trial. Supposing the trial neutralizes everything, including the payment, this bundle is of negative value for people whose lives have more than $9,000 of value to them, and positive for those whose lives are worse than that. This isn’t great, in that you help some people who are less desperate, and you can only help people a small amount, but it seems better than the apparent status quo.
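The $9,000 breakeven can be checked with a quick calculation. This is a sketch under an assumption of mine: that joining the trial neutralizes everything, the $1,000 payment included.

```python
# Sketch of the bundle's arithmetic. Assumption: joining the trial
# neutralizes everything, including the $1,000 payment. So with
# probability 0.9 the person keeps a life worth v plus the $1,000,
# and with probability 0.1 everything becomes worth 0.

def bundle_gain(v, payment=1000, trial_prob=0.1):
    """Expected gain from accepting the bundle over declining (keeping v)."""
    return (1 - trial_prob) * (v + payment) - v

assert abs(bundle_gain(9000)) < 1e-6   # breakeven: a life worth $9,000
assert bundle_gain(12000) < 0          # better-off people decline
assert bundle_gain(4000) > 0           # worse-off people accept
```

Because the gain falls as `v` rises, only people below the breakeven accept, which is what makes the offer a (crude) screening device.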
Can you design something better?