How to buy a truth from a liar

Suppose you want to find out the answer to a binary question, such as ‘would open borders destroy America?’ or ‘should I follow this plan?’. You know someone who has access to a lot of evidence on the question. However, you don’t trust them; in particular, you don’t trust them to show you all of the relevant evidence. Let’s suppose that if they purport to show you a piece of evidence, you can verify that it is real. But since they can tell you about any subset of the evidence they know, they can probably make a case for either conclusion. So without looking into the evidence yourself, it appears you can’t really employ them to inform you, because you can’t pay them more when they tell the truth. It seems to be a case of ‘he who pays the piper must know the tune’.

But here is a way to hire them: pay them every time you change your mind on the question, within a given period. Their optimal strategy for revealing evidence to make money must leave you with the correct conclusion (though maybe not with all of the evidence), because if it left you with the wrong conclusion, they could get in one more mind-change by giving you the remaining evidence. (And their optimal strategy overall can be much like their optimal strategy for making money, if enough money is on the table.)
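
Here is a minimal sketch of how such a scheme could work, treating evidence as log-likelihood ratios for the hypothesis and assuming you state whichever answer is currently above 50% (the encoding, numbers, and names below are illustrative, not part of the proposal):

```python
# A toy pay-per-mind-change protocol. Evidence is encoded as log-likelihood
# ratios for the hypothesis H, the listener updates as a Bayesian, and the
# liar is paid a flat amount each time the listener's stated answer flips.
import math

def posterior(log_odds):
    """Convert log-odds into a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

def run_protocol(evidence_in_reveal_order, payment_per_flip=1.0):
    """Reveal evidence piece by piece, paying each time the stated answer changes."""
    log_odds = 0.0                                # prior of 0.5 on H
    stated_answer = posterior(log_odds) > 0.5     # 'no' at exactly 50%
    total_paid = 0.0
    for log_likelihood_ratio in evidence_in_reveal_order:
        log_odds += log_likelihood_ratio
        new_answer = posterior(log_odds) > 0.5
        if new_answer != stated_answer:
            total_paid += payment_per_flip        # you changed your mind: pay
            stated_answer = new_answer
    return stated_answer, posterior(log_odds), total_paid

# Hypothetical evidence, in whatever order the liar chooses to reveal it.
print(run_protocol([+2.0, -2.5, +3.0, -0.5]))     # (True, 0.88..., 3.0)
```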

This may appear to rely on you changing your mind according to the evidence. However, I think it only really requires that you have some probability of changing your mind when given all of the evidence.

Still, you might just hear their evidence and refuse to ever officially change your mind. That way you keep your money and privately know the truth. How can they trust you to change your mind? If the question is related to a course of action, your current belief (a change of which would reward them) can be tied to a commitment to a certain action, were the period to end without further evidence provided. And if the belief is not related to a course of action, you could often make it related, via a commitment to bet.

This strategy seems to work even for ‘ought questions’, without the employee needing to understand or share your values.

14 responses to “How to buy a truth from a liar”

  1. Really nice article. I like this a lot, one of the best ideas I’ve ever seen here, though admittedly I’m not a longtime follower.

  2. It seems like their best strategy is to structure the evidence so that you change your mind as many times as possible, while the payment for the last change (the one toward the truth) is an insignificant fraction of the total. Unless rewards increase exponentially; but then you run the risk of being easily bankrupted.

    • I mostly agree, but it doesn’t matter if the last reward is small compared to the rewards they have received so far, as long as it is greater than how much they care about pushing you to believe one thing or the other. So the reward could be constant for each change, it seems to me. I agree that might be fairly expensive.

  3. I do really like this idea! At least in principle. But donreba is right; if they are clever, they can sort the order in which they feed you data points from their evidence database so that almost every new data point pushes the argument in favour of the opposite side. Depending on the nature of the problem, they might be able to keep you dancing back and forth on the boundary of indecision, changing your belief thousands of times, before they eventually feed you the bulk of their remaining data, which persuasively pushes the argument toward one conclusion (and which they had deliberately withheld). This is, of course, assuming you update instantly, like an ideal Bayesian, with each data point you receive.

    Strategies like this might be countered with a kind of “deadband of disbelief”, where you continue to hold your old belief until the accumulated evidence exceeds some hysteresis threshold, even though a Bayesian who weighs the evidence impartially would say your belief is ever-so-slightly incorrect. This is similar to the idea of a Schmitt trigger in electrical engineering (a sketch of this is given at the end of this comment).

    Possibly a similar method, which doesn’t abandon ideal Bayesianism, would be to apply a strong probability penalty to the most recent evidence point if it would otherwise change your beliefs, on the assumption that you are only being given that data point selectively in order to make you change your mind and fork out cash, and that it is therefore unlikely to be a representative sample. This would force the liar to provide more convincing data points in order to keep you balanced at your belief-switching point, which means that you’ll converge on the truth more quickly.

    In either case, you’re still rather at the mercy of the size and standard deviation of the reservoir of data that they have access to, as the more outliers they possess that they can use to push your beliefs back away from the truth, the more times they can get you to change your mind. You’ll still end up with the truth, but it might be expensive.
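
    A minimal sketch of the ‘deadband of disbelief’ idea from this comment, again treating evidence as log-likelihood ratios and picking 40% and 60% as the outer thresholds (the thresholds, numbers, and names are illustrative assumptions):

    ```python
    import math

    def posterior(log_odds):
        return 1.0 / (1.0 + math.exp(-log_odds))

    def run_with_deadband(evidence, low=0.4, high=0.6, payment_per_flip=1.0):
        """Only flip the stated answer when the posterior crosses an outer threshold."""
        log_odds = 0.0                    # prior of 0.5
        stated_answer = False             # start by stating 'no'
        total_paid = 0.0
        for log_likelihood_ratio in evidence:
            log_odds += log_likelihood_ratio
            p = posterior(log_odds)
            # Schmitt-trigger style: small wobbles around 50% no longer trigger payouts.
            if not stated_answer and p > high:
                stated_answer, total_paid = True, total_paid + payment_per_flip
            elif stated_answer and p < low:
                stated_answer, total_paid = False, total_paid + payment_per_flip
        return stated_answer, posterior(log_odds), total_paid

    # This wobbly sequence costs one payment here, versus five with a bare 50% line.
    print(run_with_deadband([+0.5, -0.6, +0.7, -0.8, +2.0]))   # (True, 0.85..., 1.0)
    ```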

  4. When has this strategy been implemented? Do you have any examples of when you have already used it, and in particular, for ‘ought’ questions? Does this idea have any implications for actors in prediction markets?

  5. Evan, I’m trying to imagine how this might work for a prediction market… Imagine a market in which people holding shares on one side are rewarded each time the price crosses 50% (since 51% can be interpreted as ‘changing one’s mind to believe in X’ and 49% vice versa); then the price will fluctuate between 51% and 49% repeatedly as people look at the price and decide whether to ‘reveal’ more evidence by buying and forcing the price back over the line in the direction of what they believe is true. It certainly would be odd, and it sounds inefficient compared to normal prediction market activity.

    Wouldn’t it be better to just continue paying out under a proper scoring rule? If you can say ‘ah, you just changed my mind’, presumably you can also do something like specify your distribution and pay based on how much the liar manages to compress your distribution (one concrete version of this is sketched at the end of this comment).

    > If the question is related to a course of action, your current belief (a change of which would reward them) can be tied to a commitment to a certain action, were the period to end without further evidence provided. And if the belief is not related to a course of action, you could often make it related, via a commitment to bet.

    This only incentivizes enough revelation to change that specific action. For example, if the liar knows the true probability is 1% but you change your action at 49% (perhaps the two courses are equally costly/beneficial), then they only need to feed you enough information to leave you at 49% and take action B rather than A. This is a problem because, if you knew it really was as low as 1%, you might take actions C..Z which are important but too hard to bet on or define in advance, and so from the liar’s perspective, C..Z are positive externalities.
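
    One concrete, heavily assumed reading of the scoring-rule suggestion above: if the question eventually resolves, the liar could be paid the log-score improvement of your final stated probability over your prior. The function and numbers below are hypothetical, a sketch rather than a spec:

    ```python
    import math

    def liar_payment(prior, final_belief, outcome, scale=1.0):
        """Pay the liar the log-score improvement of your final belief over your prior."""
        def log_score(p):
            return math.log(p if outcome else 1.0 - p)
        return scale * (log_score(final_belief) - log_score(prior))

    # Hypothetical numbers: you start at 50%, end at 90%, and then the question resolves.
    print(liar_payment(0.5, 0.9, outcome=True))    # ~ +0.59: paid for sharpening your belief
    print(liar_payment(0.5, 0.9, outcome=False))   # ~ -1.61: clawed back for misleading you
    ```

    Because the log scores telescope across revelations, splitting evidence into more pieces changes nothing, and the liar is rewarded for how much they actually move you toward the truth rather than for how often they flip you.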

  6. Also, if the liar has an interest in leaving us misinformed and the incentive is greater than the reward we offer for giving us that last bit of evidence, then their optimal strategy is to withhold it.

  7. You can take any piece of info and break it into an arbitrary number of smaller pieces of info that add up to the original. So if they could get you very near the border of changing your mind, and then move you back and forth across it with tiny pieces of info, they could extract an arbitrary amount from you.

  8. This is reminiscent of the mistake bound model in machine learning. It’s an online learning setting: at each step the learner is given a training example, returns a predicted classification for it, and then is told the true classification in order to update its hypothesis for the next step.

    The mistake bound model asks: for some concept class C and online learner A, what is the maximum (over all concepts c in C and all sequences of examples consistent with c) number of wrong classifications that A outputs?

    For finite hypothesis classes, there’s a simple algorithm that achieves a mistake bound of log2(|C|): keep track of the set of non-falsified hypotheses and predict according to a majority vote among them (the halving algorithm; a toy version is sketched at the end of this comment). There are also infinite classes learnable with a finite mistake bound, e.g. linear classifiers with margin at least γ on data points with magnitude at most r.

    So the liar is the adversary who supplies you with examples, and your mistake bound is his maximum payout. But (as usual) in the real world you don’t know in advance what concept space the truth lies in. And the liar can turn random noise in the examples available to him into adversarial noise. (For random noise we might still hope to make a number of mistakes that grows slowly with the number of examples seen; but for adversarial noise the picture is even grimmer.)

    This is all, admittedly, a special case; your idea applies to a wider range of propositions being questioned, a broader idea of what counts as data, and a variety of different ways of determining whether you’ve changed your mind. Still, I thought it interesting that one instance of the scheme had already been investigated under a different guise.
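
    For concreteness, here is a toy version of the halving algorithm described above, with a hypothetical concept class of integer thresholds (the class, the example stream, and the names are all illustrative):

    ```python
    # Halving algorithm: predict by majority vote over the not-yet-falsified
    # hypotheses; every mistake wipes out at least half of them, so (as long as
    # the target concept is in the class) at most log2(|C|) mistakes are possible.
    def halving_learner(concept_class, labelled_stream):
        alive = list(concept_class)                  # hypotheses not yet falsified
        mistakes = 0
        for x, true_label in labelled_stream:
            votes = sum(1 for h in alive if h(x))
            prediction = 2 * votes >= len(alive)     # majority vote, ties -> True
            if prediction != true_label:
                mistakes += 1
            alive = [h for h in alive if h(x) == true_label]
        return mistakes

    # Toy concept class: thresholds on 0..15, so |C| = 16 and at most 4 mistakes.
    thresholds = [(lambda x, t=t: x >= t) for t in range(16)]
    stream = [(x, x >= 11) for x in [3, 14, 10, 12, 11, 5, 8]]   # true threshold is 11
    print(halving_learner(thresholds, stream))                   # 1 mistake on this stream
    ```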

  9. A natural instantiation is to pay based on the squared change (P(X|E1) - P(X|E2))^2. If you are rational, it turns out the expected total cost is bounded. Moreover, this mechanism seems to be strategy-proof: breaking up evidence into smaller pieces doesn’t help. (E.g. if you have 10 independent pieces of evidence, revealing them all at once gives the same expected payoff as revealing them one at a time; a small simulation of this is given at the end of this comment.)

    It would be good to think more about this scheme, and to really understand the Bayesian equilibria in the game.
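
    A small Monte Carlo sketch of that invariance, using a hypothetical biased-coin-versus-fair-coin question (the model, parameters, and names are all assumptions): since a rational posterior is a martingale, the expected total of squared changes should come out roughly the same whether the flips are revealed one at a time or in a single batch.

    ```python
    import math, random

    P_H, P_BIASED, P_FAIR, N_FLIPS = 0.5, 0.8, 0.5, 10   # prior and the two coin models

    def posterior_after(flips):
        """Posterior probability that the coin is the biased one, given the flips."""
        log_odds = math.log(P_H / (1.0 - P_H))
        for heads in flips:
            if heads:
                log_odds += math.log(P_BIASED / P_FAIR)
            else:
                log_odds += math.log((1.0 - P_BIASED) / (1.0 - P_FAIR))
        return 1.0 / (1.0 + math.exp(-log_odds))

    def one_run(rng):
        biased = rng.random() < P_H                      # sample which world we are in
        p = P_BIASED if biased else P_FAIR
        flips = [rng.random() < p for _ in range(N_FLIPS)]
        beliefs = [P_H] + [posterior_after(flips[:i + 1]) for i in range(N_FLIPS)]
        piecewise = sum((b - a) ** 2 for a, b in zip(beliefs, beliefs[1:]))  # flip by flip
        batch = (posterior_after(flips) - P_H) ** 2                          # all at once
        return piecewise, batch

    rng = random.Random(0)
    runs = [one_run(rng) for _ in range(20000)]
    print(sum(p for p, _ in runs) / len(runs))   # average payout, piecewise reveals
    print(sum(b for _, b in runs) / len(runs))   # average payout, one big reveal (close to the above)
    ```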

  10. Pingback: AN #108 Why we need to carefully examine arguments about AI risk – AGI Watchful Guardian
