… pour juger de ce que l'on doit faire pour obtenir un bien ou pour éviter un mal,
il ne faut pas seulement considérer le bien & le mal en soi,
mais aussi la probabilité qu'il arrive ou n'arrive pas;
& regarder géometriquement la proportion que toutes ces choses ont ensembles …
- Antoine Arnauld & Pierre Nicole (1662, IV, 16), La logique, ou l'art de penser, in the original French
… to judge what one ought to do to obtain a good or avoid an evil,
one must not only consider the good and the evil in itself,
but also the probability that it will or will not happen;
and view geometrically the proportion that all these things have together …
- Jeffrey's (1981, p. 473) translation
Newcomb's Problem
Newcomb's problem is named after the physicist William Newcomb, who first formulated it
It was then presented by Robert Nozick (1969) as a dilemma in a decision-theoretic context
Background:
Box 1 is transparent and contains $1,000
Box 2 is opaque and contains either $1,000,000 or nothing
As a human agent, you may choose 1 of 2 possible strategies:
φ_{1}: Take Box 2 only
φ_{2}: Take Box 1 and Box 2
The daemon predictor (e.g. an artificial superintelligence, a highly intelligent being from another planet, etc.) may choose 1 of 2 possible strategies:
φ_{3}: Put $1,000,000 in Box 2
φ_{4}: Put nothing in Box 2
In addition, both the daemon predictor and you as the human agent know the following:
PREDICTION 1: If the daemon predictor predicts that you will choose φ_{2} and take both Box 1 and Box 2, then it will choose φ_{4} and put nothing in Box 2
PREDICTION 2: If the daemon predictor predicts that you will choose φ_{1} and take Box 2 only, then it will choose φ_{3} and put $1,000,000 in Box 2
As the daemon predictor will make its move (in favour of either φ_{3} or φ_{4}) before you, its PREDICTIONS may be represented as states into which the world has been partitioned
Let s_{1} denote the state into which the world has been partitioned by PREDICTION 1
Let s_{2} denote the state into which the world has been partitioned by PREDICTION 2
The possible states from this scenario are:
s_{3}: You end up with $0
(You pick φ_{1} after the daemon predictor has selected φ_{4}, acting incorrectly on PREDICTION 1, and partitioned the world into s_{1})
s_{4}: You end up with $1,000
(You pick φ_{2} after the daemon predictor has selected φ_{4}, acting correctly on PREDICTION 1, and partitioned the world into s_{1})
s_{5}: You end up with $1,000,000
(You pick φ_{1} after the daemon predictor has selected φ_{3}, acting correctly on PREDICTION 2, and partitioned the world into s_{2})
s_{6}: You end up with $1,001,000
(You pick φ_{2} after the daemon predictor has selected φ_{3}, acting incorrectly on PREDICTION 2, and partitioned the world into s_{2})
Let outcome o_{13} denote the act-state pair φ_{1}-s_{3} (i.e. one-boxing and $0)
Let outcome o_{24} denote the act-state pair φ_{2}-s_{4} (i.e. two-boxing and $1,000)
Let outcome o_{15} denote the act-state pair φ_{1}-s_{5} (i.e. one-boxing and $1,000,000)
Let outcome o_{26} denote the act-state pair φ_{2}-s_{6} (i.e. two-boxing and $1,001,000)
Q: Would the rational choice be in favour of φ_{1} (viz. take Box 2 only) or φ_{2} (take Box 1 and Box 2)?
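The act-state payoff structure defined above can be laid out explicitly; a minimal sketch in Python (the dictionary keys are my own labels, not notation from the text):

```python
# Payoff table for Newcomb's problem: payoff[act][state] in dollars.
# Acts:   "one_box" = φ_1 (take Box 2 only), "two_box" = φ_2 (take both boxes).
# States: "s1" = PREDICTION 1 / φ_4 (Box 2 empty),
#         "s2" = PREDICTION 2 / φ_3 (Box 2 holds $1,000,000).
payoff = {
    "one_box": {"s1": 0,     "s2": 1_000_000},   # outcomes o_13 and o_15
    "two_box": {"s1": 1_000, "s2": 1_001_000},   # outcomes o_24 and o_26
}

for act, by_state in payoff.items():
    print(act, by_state)
```

Each entry corresponds to one of the four act-state outcomes o_{13}, o_{24}, o_{15}, o_{26} defined above.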
2 conflicting principles of choice
Suppose that an action φ_{i} yields n mutually exclusive outcomes o_{i1}, o_{i2}, …, o_{in}
These n mutually exclusive outcomes have the corresponding utility values u(o_{i1}), u(o_{i2}), …, u(o_{in})
The principle of maximizing expected utility
According to the principle of maximizing expected utility:
EU(φ_{i}) = Σ_{j=1}^{n} P(o_{ij}) × u(o_{ij})
1^{st} line of reasoning in accordance with the principle of maximizing expected utility:
If you pick φ_{2} and take what is in Box 1 and Box 2, then the daemon predictor would have predicted this with PREDICTION 1
∴ The predictor would have picked φ_{4} and put nothing in Box 2
∴ You will probably end up with $1,000
Conversely, if you pick φ_{1} and take what is in Box 2 only, then the daemon predictor would have predicted this with PREDICTION 2
∴ The predictor would have picked φ_{3} and put $1,000,000 in Box 2
∴ You will probably end up with $1,000,000
∴ RECOMMENDATION 1 according to the principle of maximizing expected utility:
You should one-box (i.e. pick φ_{1} and take Box 2 only)
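This line of reasoning can be made numeric. Suppose the predictor is correct with probability p (a parameter the text leaves implicit) and take utility to be the dollar amount itself (an assumption); then, as a sketch:

```python
# Expected utility of each act when the predictor is right with probability p.
# Utilities are taken to be the raw dollar payoffs (an assumption).
def expected_utility(p: float) -> dict:
    # One-boxing: correct prediction -> s_2 ($1,000,000); wrong -> s_1 ($0).
    eu_one_box = p * 1_000_000 + (1 - p) * 0
    # Two-boxing: correct prediction -> s_1 ($1,000); wrong -> s_2 ($1,001,000).
    eu_two_box = p * 1_000 + (1 - p) * 1_001_000
    return {"one_box": eu_one_box, "two_box": eu_two_box}

print(expected_utility(0.99))  # a highly reliable predictor favours one-boxing
```

Under these dollar utilities, one-boxing has the higher expected utility whenever p > 1,001,000 / 2,000,000 = 0.5005, i.e. whenever the predictor is even slightly better than chance.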
The dominance principle
According to the dominance principle:
If there is a partition of states of the world such that relative to it, action φ_{i} weakly dominates action φ_{k}, then φ_{i} should be performed rather than φ_{k}
2^{nd} line of reasoning in accordance with the dominance principle:
The daemon predictor has already made its prediction (PREDICTION 1 or PREDICTION 2) and either placed $1,000,000 in Box 2 (φ_{3}) or not done so (φ_{4})
The daemon predictor has already left
$1,000,000 is either in Box 2 or it is not
If the money is already there, it will stay there and it is not going to disappear
If the money is not already there, it is not going to suddenly appear if you pick φ_{1} and take Box 2 only
The world has already been partitioned into either s_{1} (by PREDICTION 1) or s_{2} (by PREDICTION 2)
Relative to s_{1} (in which the daemon predictor puts nothing in Box 2), two-boxing (yielding outcome o_{24} and $1,000) dominates one-boxing (yielding outcome o_{13} and $0)
Relative to s_{2} (in which the daemon predictor puts $1,000,000 in Box 2), two-boxing (yielding outcome o_{26} and $1,001,000) dominates one-boxing (yielding outcome o_{15} and $1,000,000)
There are no other possible states into which the world has been partitioned
∴ Two-boxing strictly dominates one-boxing
∴ RECOMMENDATION 2 according to the dominance principle:
You should two-box (i.e. pick φ_{2} and take Box 1 and Box 2)
After all, why should you pass up on the $1,000 in Box 1 that you can clearly see?
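The state-by-state comparison behind this reasoning can be checked mechanically; a sketch using the same payoffs as above (labels are my own):

```python
# Dominance check: does two-boxing pay strictly more in EVERY state?
payoff = {
    "one_box": {"s1": 0,     "s2": 1_000_000},
    "two_box": {"s1": 1_000, "s2": 1_001_000},
}
strictly_dominates = all(
    payoff["two_box"][s] > payoff["one_box"][s] for s in ("s1", "s2")
)
print(strictly_dominates)  # True: two-boxing is better in s_1 and in s_2 alike
```

Because the comparison holds state by state, it does not depend on any probability assigned to s_{1} or s_{2}; that is exactly what makes the dominance argument independent of the expected-utility calculation.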
HORN 1 of the dilemma: RECOMMENDATION 1 (one-box, pick φ_{1}, and take Box 2 only)
HORN 1 is supported by the principle of maximizing expected utility
However, HORN 1 violates another principle of choice: the dominance principle
HORN 2 of the dilemma: RECOMMENDATION 2 (two-box, pick φ_{2}, and take Box 1 and Box 2)
HORN 2 is supported by the dominance principle
However, HORN 2 violates another principle of choice: the principle of maximizing expected utility
∴ Whichever horn of the dilemma (HORN 1 or HORN 2) you pick, you will end up violating a principle of choice