The paradox was devised by William Newcomb and first published by Robert Nozick in 1969.
Imagine a being that can predict your choices with high accuracy, and you know this. There are two boxes, A and B. Box A contains 1000 dollars, and box B contains either one million dollars or nothing. You have two choices: 1) take what is inside both boxes, or 2) take only what is in box B. Further, it is common knowledge that:
1) If the being predicts that you will take both boxes, it will put nothing in box B.
2) If the being predicts that you will take only box B, it will put a million dollars in it.
Recall the definition of common knowledge: you know the rules, the being knows that you know them, you know that it knows that you know, and so on!
What will you choose?
There are two plausible arguments, leading to two different decisions.
1) You know the being will predict your choice and put nothing in box B if you choose both boxes, and add a million if you choose only box B. So select option 2 (take only box B).
2) The being has already made its prediction and filled the boxes, so your choice cannot change what is inside them. Whatever box B contains, taking both boxes gets you 1000 dollars more. So select option 1 (take both boxes).
In polls conducted to understand people’s preferences, respondents split roughly evenly; there are takers for both options. But why is that?
Dominance principle
Let’s first write down the payoff matrix.
| | The being predicts you take B | The being predicts you take both |
|---|---|---|
| You take box B | 1 million | 0 |
| You take both | 1 million + 1000 | 1000 |
The dominance principle states that if one strategy does at least as well as every other in each possible state of the world, and strictly better in at least one, a rational agent should choose it. In this case, taking both boxes pays 1000 dollars more than taking only box B in either column of the matrix, so it dominates.
Here is a thought experiment to explain this perspective. Imagine the far side of the boxes is transparent, and your friend is standing on that side. She can see the amounts inside. Although she can’t tell you anything, what would she be hoping for? Well, if she sees that the being has put a million in box B, you would be better off taking that box and the one that carries 1000. If she finds the being added nothing, she would still want you to take both boxes to win the guaranteed 1000.
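The dominance check can be sketched in a few lines of Python. This is a minimal illustration, not from the original article; the payoff figures come from the matrix above, and the dictionary keys and helper name are my own:

```python
# Payoff matrix in dollars: payoffs[your_action][being_prediction]
payoffs = {
    "take_B":    {"predicts_B": 1_000_000, "predicts_both": 0},
    "take_both": {"predicts_B": 1_001_000, "predicts_both": 1_000},
}

states = ["predicts_B", "predicts_both"]

def dominates(a, b):
    """True if action a pays at least as much as b in every state
    and strictly more in at least one."""
    at_least_as_good = all(payoffs[a][s] >= payoffs[b][s] for s in states)
    strictly_better = any(payoffs[a][s] > payoffs[b][s] for s in states)
    return at_least_as_good and strictly_better

print(dominates("take_both", "take_B"))  # taking both dominates taking only B
```

Because taking both boxes is better in *every* column, the friend hopes you take both no matter what she sees through the transparent side.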
Expected value theory
While expected utility theory is better suited to situations like these, I have gone for expected value theory, which is easier to explain. We estimate the expected value of each action by multiplying each payoff by its probability and summing. Suppose you trust the being to be 90% accurate; the following two calculations give the value of each decision, and you choose whichever is higher.
| Action | Expected value |
|---|---|
| You take B | 0.9 × 1,000,000 + 0.1 × 0 = 900,000 |
| You take both | 0.9 × 1,000 + 0.1 × 1,001,000 = 101,000 |
Therefore, you select only box B.
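The calculation above is easy to reproduce for any assumed accuracy. A small sketch (the 90% figure is the one used above; the function name and the break-even computation are my own additions):

```python
def expected_values(p):
    """Expected dollar value of each action when the being
    predicts correctly with probability p."""
    ev_take_B = p * 1_000_000 + (1 - p) * 0
    ev_take_both = p * 1_000 + (1 - p) * 1_001_000
    return ev_take_B, ev_take_both

one_box, two_box = expected_values(0.9)
print(one_box, two_box)  # 900,000 vs 101,000 as in the table

# Break-even accuracy: set the two expected values equal and solve for p,
# p * 1_000_000 = p * 1_000 + (1 - p) * 1_001_000  ->  p = 1_001_000 / 2_000_000
break_even = 1_001_000 / 2_000_000
print(break_even)  # 0.5005
```

One implication worth noting: expected value favours one-boxing whenever the assumed accuracy exceeds about 50.05%, so even a barely-better-than-chance predictor tips this calculation towards box B.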
References
1) Newcomb’s Problem and Two Principles of Choice: Robert Nozick
2) Newcomb’s Paradox – What Would You Choose?: Smart by Design