… pour juger de ce que l'on doit faire pour obtenir un bien ou pour éviter un mal,
il ne faut pas seulement considérer le bien & le mal en soi,
mais aussi la probabilité qu'il arrive ou n'arrive pas;
& regarder géometriquement la proportion que toutes ces choses ont ensembles …

- Antoine Arnauld & Pierre Nicole (1662, IV, 16), La logique, ou l'art de penser (in the original French)

… to judge what one ought to do to obtain a good or avoid an evil,
one must not only consider the good and the evil in itself,
but also the probability that it will or will not happen;
and view geometrically the proportion that all these things have together …
- Jeffrey's (1981, p. 473) translation

## Newcomb's Problem

1. Newcomb's problem is named after the physicist William Newcomb, who first formulated it
2. It was then presented by Robert Nozick (1969) as a dilemma in a decision-theoretic context

3. Background:
4. Box 1 is transparent and contains \$1,000
5. Box 2 is opaque and contains either \$1,000,000 or nothing

6. As a human agent, you may choose 1 of 2 possible strategies:
7. φ1: Take Box 2 only
8. φ2: Take Box 1 and Box 2

9. The daemon predictor (e.g. an artificial superintelligence, a highly intelligent being from another planet, etc.) may choose 1 of 2 possible strategies:
10. φ3: Put \$1,000,000 in Box 2
11. φ4: Put nothing in Box 2

12. In addition, both the daemon predictor and you as the human agent know the following:
13. PREDICTION 1: If the daemon predictor predicts that you will choose φ2 and take both Box 1 and Box 2, then it will choose φ4 and put nothing in Box 2
14. PREDICTION 2: If the daemon predictor predicts that you will choose φ1 and take Box 2 only, then it will choose φ3 and put \$1,000,000 in Box 2

15. As the daemon predictor will make its move (in favour of either φ3 or φ4) before you, its PREDICTIONS may be represented as states into which the world has been partitioned
16. Let s1 denote the state into which the world has been partitioned by PREDICTION 1
17. Let s2 denote the state into which the world has been partitioned by PREDICTION 2

18. The possible states from this scenario are:
19. s3: You end up with \$0
20. (You pick φ1 after the daemon predictor selects φ4, goes for PREDICTION 1 (incorrectly), and partitions the world into s1)

21. s4: You end up with \$1,000
22. (You pick φ2 after the daemon predictor selects φ4, goes for PREDICTION 1 (correctly), and partitions the world into s1)

23. s5: You end up with \$1,000,000
24. (You pick φ1 after the daemon predictor selects φ3, goes for PREDICTION 2 (correctly), and partitions the world into s2)

25. s6: You end up with \$1,001,000
26. (You pick φ2 after the daemon predictor selects φ3, goes for PREDICTION 2 (incorrectly), and partitions the world into s2)

27. Let outcome o13 denote the act-state pair φ1-s3 (i.e. one-boxing and \$0)
28. Let outcome o24 denote the act-state pair φ2-s4 (i.e. two-boxing and \$1,000)
29. Let outcome o15 denote the act-state pair φ1-s5 (i.e. one-boxing and \$1,000,000)
30. Let outcome o26 denote the act-state pair φ2-s6 (i.e. two-boxing and \$1,001,000)

31. Q: Would the rational choice be in favour of φ1 (viz. take Box 2 only) or φ2 (take Box 1 and Box 2)?
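The four act-state pairs above can be collected into a payoff matrix. A minimal Python sketch (the identifiers `PAYOFF`, `phi1`, `phi2`, `s1`, `s2` are illustrative labels, not part of the original problem):

```python
# Payoff matrix for Newcomb's problem, keyed by (act, state).
# Acts:   "phi1" = one-box (take Box 2 only), "phi2" = two-box (take both boxes).
# States: "s1" = world partitioned by PREDICTION 1 (predictor chose φ4, Box 2 empty),
#         "s2" = world partitioned by PREDICTION 2 (predictor chose φ3, Box 2 holds $1,000,000).
PAYOFF = {
    ("phi1", "s1"): 0,          # outcome o13: one-box and $0
    ("phi2", "s1"): 1_000,      # outcome o24: two-box and $1,000
    ("phi1", "s2"): 1_000_000,  # outcome o15: one-box and $1,000,000
    ("phi2", "s2"): 1_001_000,  # outcome o26: two-box and $1,001,000
}
```

Each cell is the dollar amount you end up with, given your act (row) and the state fixed by the predictor's earlier move (column).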

1. Suppose that an action φi yields n mutually exclusive outcomes oi1, oi2, …, oin
2. These n mutually exclusive outcomes have the corresponding utility values u(oi1), u(oi2), …, u(oin)

### 2 Conflicting Principles of Choice

The principle of maximizing expected utility
1. According to the principle of maximizing expected utility:
2. EU(φi) = Σj P(oij) × u(oij)

3. 1st line of reasoning in accordance with the principle of maximizing expected utility:
4. If you pick φ2 and take what is in Box 1 and Box 2, then the daemon predictor would have predicted this with PREDICTION 1
5. ∴ The predictor would have picked φ4 and put nothing in Box 2
6. ∴ You will probably end up with \$1,000

7. Conversely, if you pick φ1 and take what is in Box 2 only, then the daemon predictor would have predicted this with PREDICTION 2
8. ∴ The predictor would have picked φ3 and put \$1,000,000 in Box 2
9. ∴ You will probably end up with \$1,000,000

10. ∴ RECOMMENDATION 1 according to the principle of maximizing expected utility:
You should one-box (i.e. pick φ1 and take Box 2 only)
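The 1st line of reasoning can be made numerical. A minimal sketch, taking utility to equal the dollar payoff and assuming a predictor reliability of p = 0.9 (an illustrative figure; the problem itself does not fix this number):

```python
# EU(φi) = Σj P(oij) × u(oij), with utility = dollar payoff and an
# assumed predictor reliability p = 0.9 (illustrative, not given by the problem).
p = 0.9  # probability that the predictor's prediction is correct

# One-boxing (φ1): with probability p the predictor foresaw it (PREDICTION 2, s2),
# with probability 1 - p it wrongly predicted two-boxing (PREDICTION 1, s1).
eu_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxing (φ2): with probability p the predictor foresaw it (PREDICTION 1, s1),
# with probability 1 - p it wrongly predicted one-boxing (PREDICTION 2, s2).
eu_two_box = p * 1_000 + (1 - p) * 1_001_000

print(round(eu_one_box), round(eu_two_box))  # 900000 101000
```

For any p > 0.5005, EU(φ1) exceeds EU(φ2), so maximizing expected utility recommends one-boxing even for a rather unreliable predictor.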

The dominance principle
1. According to the dominance principle:
2. If there is a partition of states of the world such that relative to it, action φi weakly dominates action φk, then φi should be performed rather than φk

3. 2nd line of reasoning in accordance with the dominance principle:
4. The daemon predictor has already made its prediction (PREDICTION 1 or PREDICTION 2) and either placed \$1,000,000 in Box 2 (φ3) or not done so (φ4)
5. The daemon predictor has already left
6. \$1,000,000 is either in Box 2 or it is not
7. If the money is already there, it will stay there and it is not going to disappear
8. If the money is not already there, it is not going to suddenly appear if you pick φ1 and take Box 2 only

9. The world has already been partitioned into either s1 (by PREDICTION 1) or s2 (by PREDICTION 2)
10. Relative to s1 (in which the daemon predictor puts nothing in Box 2), two-boxing (yielding outcome o24 and \$1,000) dominates one-boxing (yielding outcome o13 and \$0)
11. Relative to s2 (in which the daemon predictor puts \$1,000,000 in Box 2), two-boxing (yielding outcome o26 and \$1,001,000) dominates one-boxing (yielding outcome o15 and \$1,000,000)
12. There are no other possible states into which the world has been partitioned
13. ∴ Two-boxing strictly dominates one-boxing

14. ∴ RECOMMENDATION 2 according to the dominance principle:
You should two-box (i.e. pick φ2 and take Box 1 and Box 2)
15. After all, why should you pass up on the \$1,000 in Box 1 that you can clearly see?
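The state-by-state comparison behind RECOMMENDATION 2 can be checked mechanically. A minimal sketch (identifiers are illustrative labels for the acts and states above):

```python
# Payoffs keyed by (act, state): "phi1" = one-box, "phi2" = two-box;
# "s1"/"s2" are the states fixed by PREDICTION 1 / PREDICTION 2.
payoff = {
    ("phi1", "s1"): 0,         ("phi2", "s1"): 1_000,
    ("phi1", "s2"): 1_000_000, ("phi2", "s2"): 1_001_000,
}
states = ["s1", "s2"]

# φ2 strictly dominates φ1 iff it pays strictly more in every state.
strictly_dominates = all(payoff[("phi2", s)] > payoff[("phi1", s)] for s in states)
print(strictly_dominates)  # True
```

Because the predictor has already moved, the dominance reasoner treats the state as fixed and compares acts within each column only.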

1. HORN 1 of the dilemma: RECOMMENDATION 1 (one-box, pick φ1, and take Box 2 only)
2. HORN 1 is supported by the principle of maximizing expected utility
3. However, HORN 1 violates another principle of choice: the dominance principle

4. HORN 2 of the dilemma: RECOMMENDATION 2 (two-box, pick φ2, and take Box 1 and Box 2)
5. HORN 2 is supported by the dominance principle
6. However, HORN 2 violates another principle of choice: the principle of maximizing expected utility

7. ∴ Whichever horn of the dilemma (HORN 1 or HORN 2) you pick, you will end up violating a principle of choice