Consider the following very simple game: a Bernoulli trial (a trial which results in one of two possible outcomes, labelled “success” and “failure”) is carried out with success probability $p$. Beforehand, you are told the value of $p$ and asked to give a definite prediction of the trial’s outcome. That is, you have to predict either success or failure; just saying “the probability of success is $p$” is not enough. You win if and only if you predict the correct outcome.

Here are two reasonable-sounding strategies for this game:

- If $p > 1/2$, predict success. If $p < 1/2$, predict failure. If $p = 1/2$, predict success with probability 0.5 and failure with probability 0.5.
- Predict success with probability $p$ and failure with probability $1 - p$.

In game-theoretic language, the difference between strategies 1 and 2 is that strategy 1 involves the use of a pure strategy if possible, i.e. one in which the choice of what to predict is made deterministically, while strategy 2 is always mixed, i.e. the choice of what to predict is made randomly.
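As a sketch, the two strategies might look like this in Python (the outcome labels and function names are my own choices):

```python
import random

def strategy1(p):
    """Pure when possible: deterministically predict the more probable outcome."""
    if p > 0.5:
        return "success"
    if p < 0.5:
        return "failure"
    return random.choice(["success", "failure"])  # p = 1/2: break the tie at random

def strategy2(p):
    """Mixed: predict success with probability p, failure with probability 1 - p."""
    return "success" if random.random() < p else "failure"
```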

But which is better? Note that the answer may depend on the value of $p$. Try to think about it for a minute before moving on to the next paragraph.

If $p = 1/2$, then the strategies are identical and therefore both equally good.

If $p \ne 1/2$, let $q$ be the probability of the more probable outcome (i.e. $q = p$ if $p > 1/2$ and $q = 1 - p$ if $p < 1/2$). If the more probable outcome happens, then you win for sure under strategy 1 but you only have probability $q$ of winning under strategy 2. If the less probable outcome happens, then you lose for sure under strategy 1 but you still have probability $1 - q$ of winning under strategy 2. Therefore the probability of winning is $q$ under strategy 1 and $q^2 + (1 - q)^2$ under strategy 2. So strategy 1 is better than strategy 2 if and only if

$$q > q^2 + (1 - q)^2,$$

i.e.

$$2q^2 - 3q + 1 < 0.$$

This quadratic inequality holds if and only if $1/2 < q < 1$. But $q$ is the probability of the more probable outcome, and therefore $q > 1/2$ for sure. Therefore, strategy 1 is always better if $p \ne 1/2$ (when $q = 1$, i.e. $p \in \{0, 1\}$, the two strategies make the same prediction anyway).
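A quick Monte Carlo check of the two closed forms, $q$ and $q^2 + (1 - q)^2$ (a sketch assuming $p \ne 1/2$; the function name is mine):

```python
import random

def win_prob(strategy, p, trials=100_000):
    """Estimate the probability of correctly predicting a Bernoulli(p) trial."""
    wins = 0
    for _ in range(trials):
        outcome = random.random() < p          # True means "success"
        if strategy == 1:
            prediction = p > 0.5               # pure: predict the likelier outcome
        else:
            prediction = random.random() < p   # mixed: predict success w.p. p
        wins += (prediction == outcome)
    return wins / trials

p = 0.7
print(win_prob(1, p))   # ≈ q = 0.7
print(win_prob(2, p))   # ≈ q² + (1-q)² = 0.58
```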

I find this result weird and a little counterintuitive when it’s stated so abstractly. It seems to me like the most natural way of obtaining a definite value from the distribution—drawing randomly from the distribution—should be the best one.

But I guess it does make sense, if you think about it as applying to a concrete situation. For example, if you were on a jury and you thought there was a $1/1024$ probability that the defendant was guilty, it would be crazy to then flip 10 coins and precommit to arguing for the defendant’s guilt if every one of them came up heads. The other jurors would think you were mad (and probably be very angry with you, if they did all come up heads).

The result has interesting implications for how people should act on their beliefs. If you believe that degrees of belief can be usefully modelled as probabilities, and you try to apply this in everyday reasoning, you will often be faced with the problem of deciding whether to act in accordance with a belief’s truth even if you only place a certain probability $p$ on that belief being true. Should you always act in accordance with the belief if $p > 1/2$, or should you have probability $p$ of acting in accordance with it at any given time? Until I wrote this post it wasn’t obvious to me, but the result in this post suggests you should do the former.

I do wonder if there is anything strategy 2 is good for, though. Comment if you have an idea!

One answer: strategy 2 is better if two people make their own predictions, and the win is if at least one person makes the correct prediction. Because under strategy 1, both people will make the same prediction (as long as $p \ne 1/2$), so it’s effectively the same as if there’s only one person and the probability of winning is $q$. But under strategy 2, the probability of winning is

$$1 - q(1 - q)^2 - (1 - q)q^2,$$

which simplifies to

$$1 - q(1 - q).$$

The inequality

$$1 - q(1 - q) > q$$

is equivalent to

$$(1 - q)^2 > 0,$$

which is true for every $q < 1$.
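A simulation of this two-person variant (a sketch; the function name is mine, and the pure team is assumed to face $p \ne 1/2$):

```python
import random

def team_win_prob(mixed, p, trials=200_000):
    """Two independent predictors; the team wins if at least one is right."""
    wins = 0
    for _ in range(trials):
        outcome = random.random() < p                        # True means "success"
        if mixed:
            preds = [random.random() < p for _ in range(2)]  # strategy 2, independently
        else:
            preds = [p > 0.5, p > 0.5]                       # strategy 1: identical picks
        wins += any(pred == outcome for pred in preds)
    return wins / trials

p = 0.7
print(team_win_prob(False, p))  # ≈ q = 0.7
print(team_win_prob(True, p))   # ≈ 1 - q(1-q) = 0.79
```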

But of course, strategy 2 isn’t the optimal one in this situation. The optimal strategy is for one person to predict success and one person to predict failure, so that winning is certain. But perhaps strategy 2 is optimal if both players have to use the same strategy. I haven’t checked this yet.

Another use of strategy 2 would be protection in the case where your model is really wrong. If you think you’re playing $(p, 1 - p)$ with $p > 1/2$ but really you’re playing $(q, 1 - q)$ with $q < 1/2$, then the mixed strategy outperforms the pure strategy. This could be useful if you’re making a guess under conditions of uncertainty where the probability is close to a half.

If you’re not told the exact success probability, but rather told that the success probability was randomly generated from a given probability distribution over $[0, 1]$, then the probability that the trial succeeds is

$$\int_0^1 p f(p) \, \mathrm{d}p$$

(writing $P$ for the randomly generated success probability and $f$ for its density), which is the same as the mean of $P$. The probability that the trial fails is

$$\int_0^1 (1 - p) f(p) \, \mathrm{d}p,$$

which is the same as 1 minus the mean of $P$. So I think there’s no effective difference here from playing the game with the success probability given and equal to the mean of $P$, and that means strategy 1 is still better (with “the mean of $P$” in place of “$p$”), once the whole uncertainty is accounted for.
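This equivalence can be checked numerically. As an example distribution (my choice, not from the post), take $P \sim \mathrm{Beta}(2, 1)$, whose mean is $2/3$; the overall success rate should match that mean:

```python
import random

def success_rate(trials=200_000):
    """Draw the success probability P ~ Beta(2, 1) each round, then run the trial.
    The overall success frequency should approach E[P] = 2/3."""
    successes = 0
    for _ in range(trials):
        p = random.betavariate(2, 1)        # draw the success probability
        successes += random.random() < p    # then run the Bernoulli(p) trial
    return successes / trials

print(success_rate())   # ≈ 2/3, the mean of P
```

Since the mean here exceeds $1/2$, strategy 1 applied to the mean just predicts success every time.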

I was thinking about responding on tumblr or something, but I never got around to putting together something more comprehensive. Although the result is counterintuitive (at least, it was for me!), an easy way to see it without grinding through several equations is to just graph the two win probabilities as functions of $p$. Strategy 2’s is the parabola $p^2 + (1 - p)^2$ and strategy 1’s is the absolute-value graph $\max(p, 1 - p)$. They coincide at the minimum ($p = 1/2$) and at the extreme edges ($p = 0$ and $p = 1$). In between, the parabola is obviously lower than the absolute-value graph (the absolute-value graph is overtaken by the parabola only outside the interval $p \in [0, 1]$ that we care about).
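The graphical argument can be verified on a grid (a small sketch; the variable names are mine):

```python
# Check that the parabola p² + (1-p)² stays at or below the absolute-value
# graph max(p, 1-p) on [0, 1], with equality only at p = 0, 1/2, and 1.
for i in range(101):
    p = i / 100
    parabola = p**2 + (1 - p)**2      # strategy 2's win probability
    vee = max(p, 1 - p)               # strategy 1's win probability
    assert parabola <= vee + 1e-12
    if p not in (0.0, 0.5, 1.0):
        assert parabola < vee         # strictly below everywhere else
print("parabola stays below the absolute-value graph on [0, 1]")
```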