

Free Will, God, and Newcomb’s Problem: Part 2

In the last post on Newcomb’s problem and free will, we went over a two-person game to be played with God. The key problem we were facing was whether free will can coexist with an all-knowing being that has powerful predictive capabilities. That is, if God always knows what you are going to do, and is in fact able to change the future based on what he knows about your actions now, are you really free?

The game went as follows.

There are 2 boxes in front of you: box #1 contains \$1,000; box #2 contains EITHER \$1,000,000 or nothing (based on the condition below). You have two “moves” that are legal in the game: either you take both boxes, or you take ONLY box #2. Taking only box #1 is NOT an option.

What is inside box #2 depends on whether or not God was able to PREDICT your current move. Yesterday, God made a prediction (we arbitrarily assume God is about 90% accurate in general) about which option you would choose today. If God predicted that you’d choose to take both boxes, then he left the second box empty: you get only \$1,000. If God predicted that you’d choose only box #2, then he put in the million bucks.
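The rules above can be sketched as a quick Monte Carlo simulation. This is a minimal sketch, assuming (as the setup states) a predictor who is right 90% of the time about whatever you actually choose; the function name and trial count are my own illustration, not part of the original game.

```python
import random

def play(choice, accuracy=0.9, trials=100_000):
    """Simulate many rounds of Newcomb's game.

    `choice` is "both" (take both boxes) or "two" (take only box #2).
    God's prediction matches your actual choice with probability
    `accuracy`; box #2 holds $1,000,000 only if he predicted "two".
    Returns the average payoff per round.
    """
    total = 0
    for _ in range(trials):
        correct = random.random() < accuracy
        predicted = choice if correct else ("both" if choice == "two" else "two")
        box2 = 1_000_000 if predicted == "two" else 0
        total += box2 + (1_000 if choice == "both" else 0)
    return total / trials

print(play("both"))  # averages near 101,000
print(play("two"))   # averages near 900,000
```

Note that this simulation quietly adopts the first argument’s worldview: the prediction is correlated with what you actually do, which is exactly the point the dominance argument disputes.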

The game, then, from your point of view, is really about your ability to predict God’s ability to predict your upcoming choice, which is itself dependent on your prediction! Therein lies the dilemma. Is your eventual payoff predetermined by God, or is it based on your free will, with God just guessing?

There are two key arguments to be made as to how to play. The first is the expected value argument. The second is based on the dominance principle.

Argument 1:

If you take both boxes, then clearly God would have predicted as much, so he would have left box #2 empty. But if you take only box #2, then God would likely have predicted that and put in the cool million. You like \$1,000,000 more than the measly \$1,000, so you should take only box #2.

Argument 2:

Since God chose whatever he chose yesterday, what you do today is completely independent of what God thought then. That is, your choice won’t change it. If the million is inside box #2, then it’s inside box #2. Your deciding to choose it or not is irrelevant. On the other hand, if he didn’t put the cash inside box #2, then it’s not there, period. Your choice won’t change anything. So, since you’d rather have \$1,000 than nothing, you should choose option 1 and take both boxes. And if by happy luck God predicted wrong, you’ll get an extra million. Sweet! You can’t go wrong.

Now let us analyze these arguments more closely. The first uses the expected value principle:

(90%) × (\$1,000) + (10%) × (\$1,001,000) = \$101,000 for option 1 (both boxes)

(10%) × (\$0) + (90%) × (\$1,000,000) = \$900,000 for option 2 (box #2 only)

Clearly, with this in mind, you’d want option 2: take box #2 only. It makes the most sense from a pure expected value standpoint. Therefore we have a good game-theoretic reason to choose option 2.

BUT, the second argument takes a different stand. Option 1, taking both boxes, dominates option 2, taking only box 2. What does dominance mean?

The Definition of Dominance is as follows:

Strategy 1 dominates strategy 2 if every outcome under strategy 1 is at least as good as the corresponding outcome under strategy 2, and at least one outcome under strategy 1 is strictly better than the corresponding outcome under strategy 2. Further, a player should never play a dominated strategy.
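The dominance argument can be made concrete with a small payoff table. This sketch assumes the second argument’s framing, in which the world states (“million in box #2” vs. “box #2 empty”) are fixed before you choose; the dictionary layout and function name are my own illustration:

```python
# Payoffs in dollars: rows are your strategies, columns are the
# states of box #2, assumed fixed before you move.
payoffs = {
    "both": {"million": 1_001_000, "empty": 1_000},
    "box2": {"million": 1_000_000, "empty": 0},
}

def dominates(a, b, table):
    """True if strategy `a` is at least as good as `b` in every state,
    and strictly better in at least one (the definition above)."""
    states = table[a].keys()
    at_least_as_good = all(table[a][s] >= table[b][s] for s in states)
    strictly_better = any(table[a][s] > table[b][s] for s in states)
    return at_least_as_good and strictly_better

print(dominates("both", "box2", payoffs))  # True
```

Note that the whole dispute is over whether this table is the right model: if God’s prediction depends on your actual choice, the two columns are not independent of the row you pick, and the dominance reasoning loses its grip.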

Where does that leave us? The second argument says that it just doesn’t make sense to choose option 2. But the first argument says that it just doesn’t make sense to choose option 1.

The problem is that generally speaking, these two Game Theoretic arguments should agree. But, here they don’t. Why?

The reason is that Newcomb’s problem isn’t fundamentally a problem to be solved with game theory; rather, it’s a philosophical problem without a solution. A paradox. The choice one picks is contingent on one’s underlying belief about how God himself works.

If you think of God as some kind of analogue to a political pundit, who goes around trying to predict (guess) the outcome of elections, and just happens to be good at it (90% good), your best bet is the dominance principle. That’s because what’s in the box has nothing to do with what you do now. There is little reason to believe that God is actually “seeing” the future and making his decision based on what you do; rather, he decides based on what he thinks you’ll do, from some kind of gut instinct, not metaphysical-magic-mumbo-jumbo.

On the other hand, if you attribute to God the metaphysical ability to actually see the future (with 90% accuracy), then he’ll KNOW what you are choosing now, and what is in the box will be affected by that. In this case, you are better off going with an expected value argument. God sees you, and the future, for real.

Robert Nozick commented:

I have put this problem to a large number of people, both friends and students in class. To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.

Given two such compelling opposing arguments, it will not do to rest content with one’s belief that one knows what to do. Nor will it do to just repeat one of the arguments, loudly and slowly. One must also disarm the opposing argument, explain away its force while showing it due respect.

And Philip D. Straffin remarked:

The stronger your belief that your will is free, the more likely you might be to resist the idea that a being could predict your choice, and hence the more likely you might take both boxes.

In conclusion, the key reason Newcomb’s problem is indeed a dilemma is that it hinges on the player’s beliefs about a potentially supreme being. Just how supreme is God? And what does that imply about our potential for possessing free will? Does free will exist at all? Does God? For that matter, does there need to be a God for us to lack free will?

Nozick recommended taking both boxes. Newcomb himself recommended taking only box #2. As for myself, I’d take both boxes. I’m not sure I believe in Free Will, but I certainly don’t believe in God. But, I’ve been known to be wrong.

-Saij

5 Responses

1. In argument 1, wouldn’t you get the \$1000 in box 1 and \$1,000,000 in box 2 if God was wrong in his prediction 10% of the time? The math in the expected value calculation looks incorrect (IMHO) although the answer is correct. I think the decision tree for making option 1 is
(0.1 x 1,001,000) + (0.9 x 1,000) = 101,000.

2. Oh, shit. Ya, thanks Rob for pointing that out 🙂 I’ll make the change. Clearly my mind-to-blog connection wasn’t working at full speed on that one!

3. I knew it was a nit-picky comment, but I thought you would think accuracy matters. I really enjoy this blog.

4. What is math if it’s not nit-picky? It’s a field that greatly rewards the obsessive compulsive.

5. So what is the difference between “actually” seeing and predicting again? I would assume it is that the actual corresponds exactly to what will happen, in which case in the actual event you will do 10% of one action and 90% of the other! How very Everett.

Here’s my explanation of the problem:
“Dominating Strategy” methods and “Likelihood” methods are just two ways to work out what you should do, another method is “best disaster” (minimax) and another is “least envy” (max-min).
People do not agree because they have started with one method and do not have the meta-language to expose their different models of decision making, such as is created by decision theory.

So the disagreement exists within the bounds of game theory, rather than in the philosophical vagaries of free will and prediction. On the latter, I suggest you look at Checkland’s definition of the human activity system: Essentially will is free because it is ours, and any predictive model of us that we can understand becomes a part of us, and so invalidates itself.
The only special case is when we are given a predictive model that fits with what we want, essentially giving us a path to everything we want. In such a case our will would be completely defined, but we wouldn’t mind!
Another variant is that a perfect predictive model that is not revealed to us means we are only free from our perspective, but as we only ever look through our own eyes that doesn’t really matter! It only has relevance to our judgements of people capable of holding such predictive power, not of our own life.

But back to meta-language, and why the estimates do not agree: The dominance criterion ignores the probabilities of the two events, or perhaps gets them backwards:
How do you define “corresponding”? Conventionally when dealing with an agent you don’t know which one they will choose, and accuracy is an additional complexity, but in this case you know what God will choose, subject to his accuracy. So perhaps the more accurate set of alternatives to play against is “God is right” and “God is wrong”. In this case the two are in agreement.
My definition of dominance is more strict, and takes into account you are up against two varying unknowns, and states that the dominant strategy is always at least better for every possible world state, full-stop. This includes variations due to choice, chance or anything else not directly under your control.

I would be curious what distributions of the students and friends went for the two methods.