The mistake bound model asks: for a concept class C and an online learner A, what is the maximum number of wrong classifications A outputs, taken over all concepts c in C and all sequences of examples consistent with c?

For finite hypothesis classes, there's a simple algorithm — the Halving algorithm — that achieves a mistake bound of log2(|C|): keep track of the set of not-yet-falsified hypotheses and predict according to a majority vote among them; every mistake then eliminates at least half of the remaining hypotheses. There are also infinite classes learnable with a finite mistake bound — e.g. linear classifiers separating the data with margin at least γ on points of magnitude at most r, where the perceptron makes at most (r/γ)² mistakes.
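The finite-class case is easy to sketch in code. Below is a minimal, hypothetical implementation of the majority-vote (Halving) scheme, assuming hypotheses are represented as callables mapping examples to 0/1 and that the target concept is in the class:

```python
def halving_learner(hypotheses, stream):
    """Run the Halving algorithm; return the number of mistakes made.

    hypotheses: list of callables h(x) -> 0 or 1 (assumed to contain the target)
    stream: iterable of (x, true_label) pairs consistent with some hypothesis
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, label in stream:
        # Predict by strict majority vote over the surviving hypotheses.
        votes_for_one = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes_for_one > len(version_space) else 0
        if prediction != label:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        # On a mistake, the (wrong) majority is discarded, so the
        # version space at least halves -- hence the log2(|C|) bound.
        version_space = [h for h in version_space if h(x) == label]
    return mistakes
```

For instance, with the ten threshold functions h_t(x) = [x ≥ t] for t in 0..9 (a made-up class for illustration), the learner makes at most ⌊log2(10)⌋ = 3 mistakes on any sequence labeled by one of them.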

So the liar is the adversary who supplies you with examples, and your mistake bound is his maximum payout. But (as usual) in the real world you don’t know in advance what concept space the truth lies in. And the liar can turn random noise in the examples available to him into adversarial noise. (For random noise we might still hope to make a number of mistakes that grows slowly with the number of examples seen; but for adversarial noise the picture is even grimmer.)

This is all, admittedly, a special case; your idea applies to a wider range of propositions being questioned, a broader idea of what counts as data, and a variety of different ways of determining whether you’ve changed your mind. Still, I thought it interesting that one instance of the scheme had already been investigated under a different guise.

http://slatestarcodex.com/2013/05/19/can-you-condition-yourself/

I am quite happy that a mixture of randomness and determinism could give me free will.

Since the absence of God makes these differences, the presence of God, as guarantor of justice and ultimate safety net, makes a difference too.
