A most ingenious paradox!
Maybe lots of people are busy travelling, or maybe the set-up of the question itself just wasn’t that interesting, or maybe I have a lot fewer readers than I used to. Only two people responded to my set-up of Newcomb’s Paradox. I was hoping a lively debate would emerge… ah, well. I’ll go ahead and explain what’s paradoxical about it now. Here’s the puzzle, quoted from my last entry:
Here’s the story of Newcomb’s Paradox:
There is a person, “The Predictor”, who has an excellent understanding of human psychology. The Predictor presents you with a table, on which sit two boxes. One box is black and opaque, and you cannot see what is inside. The other box is transparent, and you can see a $1000 bill inside it. The Predictor offers you a choice. You may:
- (A) take the black box, including its contents, and leave the transparent box, or
- (B) take both boxes.
But before you choose, the Predictor tells you: “I have observed you carefully, and consulted the best psychological theories, and I have already used my vast expertise to predict which choice you will make. I won’t tell you what my prediction was. But I will tell you this: I set up the boxes according to the following rule. If I predicted that you would take just the black box (A), then I put a million dollars inside that box. And if I predicted that you would take both boxes (B), then I put nothing inside the black box. Either way, as you can see, there is a thousand dollars in the transparent box.”
The Predictor is very good at predicting. In fact, you’ve observed him present ten other people with this exact choice, and every time, his prediction was correct.
The question is simple: you are faced with a choice between (A) and (B) — you can take the black box alone, or you can take both boxes. Which is the rational choice? What should you do?
Some of you know my position on this issue, but I’m keeping quiet for now. Jeremy is a two-boxer, and I think that the other noter, whom I don’t know, is too. But there is an argument for one-box-ism, which many people find compelling. Consider the argument from expected utility: basically, it tells you that when presented with two options, you should select the option for which you will have the greatest expected utility — that is, you should pick choice 1 if and only if people who pick choice 1 are more likely to end up better off than those who pick choice 2. This principle suggests that you should take only one box. Why? Simple: people who take only one box end up with a million dollars, whereas people who take both boxes end up with only a thousand. Insofar as you’d rather have a million dollars than a thousand dollars, you should take just one box. After all, everyone who takes just one box ends up a millionaire, and you have no reason to suppose that you’ll be an exception. This biconditional seems very likely to be true: you will become a millionaire if and only if you take just one box.
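To put some numbers on that expected-utility comparison, here is a minimal sketch in Python. The 99% accuracy figure is my own illustrative assumption; the puzzle only tells us the Predictor has gone ten for ten.

```python
# Expected-utility comparison for Newcomb's problem.
# ACCURACY is an assumed, illustrative figure; the puzzle itself only
# tells us the Predictor has been right ten times out of ten.
ACCURACY = 0.99          # probability the prediction matches your actual choice
MILLION = 1_000_000      # what the black box holds if one-boxing was predicted
THOUSAND = 1_000         # what the transparent box always holds

# Take only the black box: you get the million exactly when he predicted that.
eu_one_box = ACCURACY * MILLION + (1 - ACCURACY) * 0

# Take both boxes: the thousand is guaranteed, and the million shows up
# only in the rare case where he wrongly predicted one-boxing.
eu_two_box = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(f"Expected value, one box:   ${eu_one_box:,.0f}")   # $990,000
print(f"Expected value, two boxes: ${eu_two_box:,.0f}")   # $11,000
```

On those assumptions, one-boxers expect about $990,000 and two-boxers about $11,000, which is the whole pull of the argument.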
But there is another compelling argument from dominance. The idea is this: given two choices (1) and (2), if no matter what other factors obtain, (1) leaves you better off than (2), then you should take (1). This seems to suggest two-box-ism, for exactly the reason Jeremy suggested: “His prediction is made, choosing one way or the other does not affect his prediction in the slightest.” If he’s predicted that you will take both, then there’s nothing in the black box, and you have a choice between getting nothing and getting a thousand dollars. And if he’s predicted that you’ll only take one, then you’re choosing between a million and 1.001 million. So either way, you’re better off by a thousand if you take both.
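The dominance reasoning can be tabulated the same way. This is just a sketch of the payoff table implicit in the story above; the only fact it uses is that the prediction is already fixed when you choose.

```python
# Payoff table for the dominance argument. The rows are the Predictor's two
# possible (already fixed) predictions; the columns are your two choices.
payoffs = {
    "he predicted one box (black box holds $1,000,000)": {
        "take one box": 1_000_000,
        "take both boxes": 1_001_000,
    },
    "he predicted two boxes (black box is empty)": {
        "take one box": 0,
        "take both boxes": 1_000,
    },
}

# In every possible state, taking both boxes pays exactly $1,000 more.
for state, row in payoffs.items():
    assert row["take both boxes"] == row["take one box"] + 1_000
    print(f"{state}: one box ${row['take one box']:,} vs both ${row['take both boxes']:,}")
```

Whichever row you land in, the “take both boxes” column is exactly $1,000 higher, which is all the dominance principle needs.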
In my experience, these two competing arguments divide philosophers pretty evenly. So far, the nonphilosophers I’ve talked to have been mostly two-boxers. Do any of you find one-box-ism compelling? The principle of expected utility is supposed to be a good decision procedure. So is the dominance principle. So what’s going on in this case? How can two good principles deliver contradictory results?
Well, that’s why it’s a paradox.
I have this instinct that this all has something to do with free-will/determinism compatibilism. Suppose God knows everything I’m going to do; does that imply that I have no free will? I think that an incompatibilist — a person who believes that this DOES mean I have no free will — is more likely to be a two-boxer. One-box-ism seems to fit with compatibilism. I’m not sure how to spell out the analogy, but I have a strong instinct that says they’re related. I don’t much care for strong instincts, but there it is.
I just happened along by chance. As soon as I read it, I thought one box. $1,000 is good, but $1,000,000 is better. Those that picked two are a classic example of the common theory… a bird in the hand is better than two in the bush, or perhaps even more fitting, they believe in dancing with the devil they know. Hope that makes sense… be well 🙂
I have to admit, my first instinct was to go two-fisted and grab both boxes. I guess I’m really into the whole “devil you know” bit. Or else I’ve been disappointed far too often reaching for the black box, in all its forms. 🙂
After the first entry, I was a two-boxer. After this one, I could be a one-boxer, because you’ve said that most non-philosophers and about half of the philosophers are two-boxers, and studies in behavioral decision-making corroborate the fact that pretty much everyone is risk-averse when it comes to gains. So, knowing that, I know approximately what the Predictor knows…
No, never mind, I got mixed up. I’m still a two-boxer. How can this decision be a formally rational or utilitarian decision at all when you don’t know the prior probabilities and you don’t have the Predictor’s empirical evidence? Also, this reminds me of Turing’s Halting Problem. If the two are isomorphic, then there’s no such thing as a predictor that gets it right all the time.
I think I make my decision based on the utility argument. I expect that he knows that as long as I believe him about his ability to predict the decision I will make, I will make the decision that benefits me the most – that is, take only the black box. Incidentally, for other reasons, I think I am a compatibilist. -s.
Dan, the puzzle’s not supposed to depend on the Predictor being perfectly reliable. Even if he only gets it right 99% of the time, the two arguments still lead to opposite conclusions. I’m not sure I see how the halting problem is related. People always make one choice or the other… there’s never a question of their deliberation failing to halt.
As for prior probabilities and such, don’t they cancel out? Suppose we know that the predictor is right 99% of the time, with no correlation between the choice and the error (we can learn this by watching him make hundreds of predictions). The argument for expected utility still goes through for one-boxing. And the dominance argument doesn’t use probabilities at all — just bivalence.
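For what it’s worth, here is a quick check of where the two expected values cross, under that same uncorrelated-error assumption. It’s just arithmetic on the puzzle’s payoffs, nothing more:

```python
# Where do the two expected values cross? Solve for the accuracy p at which
#   p * 1,000,000  ==  p * 1,000 + (1 - p) * 1,001,000
# assuming, as above, that the Predictor's errors are uncorrelated with the choice.
MILLION, THOUSAND = 1_000_000, 1_000

break_even = (MILLION + THOUSAND) / (2 * MILLION)
print(f"One-boxing has the higher expected value once accuracy exceeds {break_even:.4f}")
# prints 0.5005, so a 99% hit rate clears the bar with room to spare
```

So the expected-utility argument favors one-boxing for any accuracy above roughly 50.05%, which is why the exact priors don’t do much work here.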
I’m all for one-boxing it. The guy’s record for guessing correctly seems to be good enough to suggest I’m going to get the million. Two-boxing hedges your bet but still places your greatest hope in him guessing wrong. Me, I’m betting he guesses right. Also, I’m a non-compatibilist non-determinist, but I think you knew that about me already.
Ah, glad to find a couple of one-boxers. Broom, the argument for two-box-ism isn’t supposed to be a bet that the Predictor might be wrong. The argument goes, “whether he’s wrong or not, don’t you want that extra $1,000?”
I am somewhat of an incompatibilist, but I was also a ‘one-boxer’. Oh well.
Quick question — I was confused, but I think I get it. Are you saying that the prediction is made already, and THEN he tells you what’s in each box… and then you choose, but you might not necessarily get a million dollars if you pick just one box, because before you KNEW that it had a million dollars, you would have picked two because you’re naturally selfish?
btw, I thought “Well, I’d wanna take 2 because then I’ll get more—WAAAAIT, no, that’s exactly what they want me to pick. Picking 2 results in something bad.” So I’m not sure if that makes me a 2-boxer or 1-boxer, cause I would take 1 based on logic, not because I’m unselfish.. cause I am indeed selfish.
Actually, I guess the predictor would go by my desire to pick 2…. not my logical choice of one. eh.
The expected utility argument seems a little flawed to me. Basically, it’s saying that because choice x correlates with result y, you should choose x in order to get y. However, what is lacking is any sort of causation. People who get the million dollars don’t get it because they took only one box; they get it because they are the sort of person who takes only one box.
Therefore, if I take one box because I hope to get the same result those other people did, then I in fact will not get it, because I did not take the box for the same reason they did, and it is that reason which is the real cause of y.
tsurt – my instinct would be to go for the one box, option (A). I have no idea why, but before I read this entry, that is what came to mind, and for reasons that I can neither define nor know myself, option (A) it is. I also like the way that you think, and the theories that you have presented… reminds me of me.
Wouldn’t it essentially come down to whether you’re more of a risk-taker or just naturally selfish? That’s how I see it, at least. x.x