Consider the following very simple game: a Bernoulli trial (a trial which results in one of two possible outcomes, labelled “success” and “failure”) is carried out with success probability *p*. Beforehand, you are told the value of *p* and asked to give a definite prediction of the trial’s outcome. That is, you have to predict either success or failure; just saying “the probability of success is *p*” is not enough. You win if and only if you predict the correct outcome.

Here are two reasonable-sounding strategies for this game:

- If *p* > 1/2, predict success. If *p* < 1/2, predict failure. If *p* = 1/2, predict success with probability 0.5 and failure with probability 0.5.
- Predict success with probability *p* and failure with probability 1 − *p*.

In game-theoretic language, the difference between strategies 1 and 2 is that strategy 1 involves the use of a pure strategy if possible, i.e. one in which the choice of what to predict is made deterministically, while strategy 2 is always mixed, i.e. the choice of what to predict is made randomly.
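The two strategies can be sketched in a few lines of Python. This is a minimal simulation of my own devising (the function names and trial count are illustrative, not part of the original setup):

```python
import random

def strategy1(p):
    """Pure strategy: predict the more probable outcome, flipping a fair coin at p = 1/2."""
    if p > 0.5:
        return True   # predict success
    if p < 0.5:
        return False  # predict failure
    return random.random() < 0.5

def strategy2(p):
    """Mixed strategy: predict success with probability p, failure with probability 1 - p."""
    return random.random() < p

def win_rate(strategy, p, trials=100_000):
    """Estimate how often the strategy's prediction matches an actual Bernoulli(p) trial."""
    wins = sum(strategy(p) == (random.random() < p) for _ in range(trials))
    return wins / trials
```

With *p* = 0.7, for example, `win_rate(strategy1, 0.7)` comes out near 0.7, while `win_rate(strategy2, 0.7)` comes out near 0.58.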

But which is better? Note that the answer may depend on the value of *p*. Try to think about it for a minute before moving on to the next paragraph.

If *p* = 1/2, then the strategies are identical and therefore equally good.

If *p* ≠ 1/2, let *q* be the probability of the more probable outcome (i.e. *q* = *p* if *p* > 1/2 and *q* = 1 − *p* if *p* < 1/2). If the more probable outcome happens, then you win for sure under strategy 1 but you only have probability *q* of winning under strategy 2. If the less probable outcome happens, then you lose for sure under strategy 1 but you still have probability 1 − *q* of winning under strategy 2. Therefore the probability of winning is *q* · 1 + (1 − *q*) · 0 = *q* under strategy 1 and *q* · *q* + (1 − *q*) · (1 − *q*) = *q*² + (1 − *q*)² under strategy 2. So strategy 1 is better than strategy 2 if and only if

*q* > *q*² + (1 − *q*)²,

i.e.

2*q*² − 3*q* + 1 < 0.

This quadratic inequality holds if and only if 1/2 < *q* < 1. But *q* is the probability of the more probable outcome, and therefore *q* > 1/2 for sure; and *q* = 1 only in the degenerate cases *p* = 0 and *p* = 1, where the two strategies make the same prediction anyway. Therefore, strategy 1 is always better if *p* ≠ 1/2.
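The algebra above is easy to check numerically. Here is a small sketch (function names are mine) comparing the two exact win probabilities as functions of *q*:

```python
def win_prob_pure(q):
    # Strategy 1 wins exactly when the more probable outcome occurs.
    return q

def win_prob_mixed(q):
    # Strategy 2 agrees with the trial when both "land" the same way:
    # success matched with probability q*q, failure with (1-q)*(1-q).
    return q * q + (1 - q) * (1 - q)

# The pure strategy's edge factors as
#   q - (q**2 + (1-q)**2) = (2*q - 1) * (1 - q),
# which is positive for 1/2 < q < 1 and zero at the endpoints.
for q in [0.5, 0.6, 0.75, 0.9, 1.0]:
    print(q, win_prob_pure(q), win_prob_mixed(q))
```

The factored form makes the conclusion visible at a glance: both factors are positive strictly between *q* = 1/2 and *q* = 1, so the pure strategy strictly dominates there.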

I find this result weird and a little counterintuitive when it’s stated so abstractly. It seems to me like the most natural way of obtaining a definite value from the distribution—drawing randomly from the distribution—should be the best one.

But I guess it does make sense, if you think about it as applying to a concrete situation. For example, if you were on a jury and you thought there was a probability of 1/1024 that the defendant was guilty, it would be crazy to then flip 10 coins and precommit to arguing for the defendant’s guilt if every one of them came up heads. The other jurors would think you were mad (and probably be very angry with you, if they did all come up heads).

The result has interesting implications for how people should act on their beliefs. If you believe that degrees of belief can be usefully modelled as probabilities, and you try to apply this in everyday reasoning, you will often be faced with the problem of deciding whether to act in accordance with a belief’s truth even if you only place a certain probability *p* on that belief being true. Should you always act in accordance with the belief if *p* > 1/2, or should you act in accordance with it with probability *p* at any given time? Until I wrote this post it wasn’t obvious to me, but the result in this post suggests you should do the former.

I do wonder if there is anything strategy 2 is good for, though. Comment if you have an idea!