I am going to present a well-known paradox (I didn't know it until last week, but the source I read said it was well known), ask for your opinion in this post, and reveal my thoughts in my next post.

I don't want you to go to the web and find out about it; I want your natural thoughts. Of course I can't stop you, but note that I did not give the name of the paradox.

Here it is:

I offer you the following bet:

I will flip a coin.

If HEADS you get 1 dollar and we end there.

If TAILS I flip again

If HEADS you get 2 dollars and we end there.

If TAILS I flip again

If HEADS you get 4 dollars and we end there.

If TAILS I flip again

etc.

1) Expected value:

Prob of getting 1 dollar is 1/2

Prob of getting 2 dollars is 1/2^2

Prob of getting 2^2 dollars is 1/2^3

etc

Hence the Expected Value is

1/2 + 1/2 + 1/2 + ... = INFINITY
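A quick sanity check of those partial sums in Python (a sketch; the function name is mine):

```python
# Each round contributes (payout) * (probability) = 2^(n-1) * (1/2^n) = 1/2,
# so the partial sums of the expected value grow without bound.
def partial_ev(rounds):
    return sum(2 ** (n - 1) * 0.5 ** n for n in range(1, rounds + 1))

print(partial_ev(10))    # 5.0
print(partial_ev(1000))  # 500.0
```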

QUESTION: Would you pay $1000 to play the game?

Leave your answer in the comments and you may say whatever you want as well,

but I request you don't give the name of the paradox if you know it.

No. The probability of regaining my $1000 is about 1/2^10, which is too low for the risk-averse gambler. I have a high probability (~1) of losing money and a very low probability (<1/2^10) of winning money (albeit a potentially large amount of money).
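That 1/2^10 figure can be verified directly: the payoff 2^(n-1) first reaches $1024 >= $1000 when the first head lands on flip 11. A quick sketch:

```python
# P(payoff >= 1000) = 1 - P(first head on one of flips 1..10)
#                   = 1 - sum_{n=1..10} 1/2^n = 1/2^10.
p = 1 - sum(0.5 ** n for n in range(1, 11))
print(p)  # 0.0009765625, i.e. exactly 1/2^10
```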

No, I would not take this bet. $1,000 is over 2^9, and flipping ten heads in a row is 1/2^10, so this is a losing proposition.

What does ten heads in a row have to do with it? The game ends on the first head.

Given my highly nonlinear reward function and scarcity of money, no. Now if I had enough money? Yes. How much would I need? Probably about $100k, ballpark. But I would first need to model my reward function more closely.

This should depend not only on your scarcity of money, but also on how useful large sums are to you. Suppose I don't care whether I have 1 billion dollars or more, so my utility function is capped at 2^30, and further suppose that up to that point it grows at most linearly. Then my expected gain from a game is at most 30/2 = 15.

P.S. What nice timing, posting this question right before this year's IMO!

The expected value is only so high because of extremely unlikely outcomes with extremely high payouts. But nobody in the world can actually pay me 2^100 dollars (or whatever) so those extremely high payouts are in some sense not "real."

Here's an example. Suppose I play against Jeff Bezos. He can afford to pay me up to about 2^38 dollars, but no more than that. So my reward is really capped at 2^38, no matter how many tails I get in a row. And if I calculate the expected value of the game with the reward capped by 2^38 then the expected value is less than $40. So it's definitely not worth paying $1000 to play the game.
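That capped expected value is easy to verify numerically (a sketch of the calculation, capping every payout at 2^38):

```python
# Expected value of the game with payouts capped at 2^38 dollars:
# rounds 1..39 contribute 2^(n-1) * 2^(-n) = 1/2 each, and all later
# rounds together contribute 2^38 * sum_{n>=40} 2^(-n) = 1/2.
CAP = 2 ** 38
ev = sum(min(2 ** (n - 1), CAP) * 0.5 ** n for n in range(1, 500))
print(ev)  # about 20.0, comfortably below $40
```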

The paper Evaluating gambles using dynamics by Ole Peters and Murray Gell-Mann contains the following paragraph: "Gambles are often treated in economics as so-called one-shot games, ... The one-shot setup seems ill-conceived to us, and the methods we propose produce little insight into the situations it may represent. It is ill-conceived because any gamble affects what we may be able to do after the gamble. If we lose our house, we cannot bet the house again. ... One situation that may be represented by a one-shot game is a bet on a coin toss after which the player (who does not believe in an afterlife) will drop dead. Our methods are not developed for such a-typical cases."

The methods proposed in that paper give a resolution of the paradox that does not need to invoke "my highly nonlinear reward function". It is enough to work out the actual consequences of "my scarcity of money" (hinted at by Unknown's comment above) to see why you should not take this bet.

Murray Gell-Mann was a famous physicist, and some people wondered why he agreed to appear as coauthor of the above paper. That paragraph ridiculing one-shot games had a strong impact on me, allowing me to better understand the following paradox: "But what I had in mind was more related to a paradox in interpretation of probability than to an attack on using real numbers to describe reality. The paradox is how mathematics forces us to give precise values for probabilities, even for events which cannot be repeated arbitrarily often (not even in principle)."

My impression is that Pascal's Wager can be misused as a one-shot game to nicely illustrate the paradox: "Even if not meant that way, Pascal's Wager presents a similar type of paradox. It describes an experiment which can not be repeated, and then assumes that one could assign a probability like 0.000001 or 1e-3000 to a certain outcome, without questioning whether such an accurate probability even makes sense in this context."

Aside from the low probability of winning, the expected value calculation is incorrect. Summing all of the payouts weighted by their probabilities gives you the expected return, but not when that return will be realised. Money invested now in another way would likely grow over time, so you need to discount each payout by an amount that depends on how much time passes before you receive it, in order to compare it to the $1000 you have at the start of the bet. This means the expected payout should be sum_{n=1}^infinity exp(-r*n)*(1/2), where r is some small non-zero constant depending on the interest rate and the time it takes to flip a coin. This sum converges to a finite value: (1/2)/(e^r - 1). So if they flip the coin too slowly, then you definitely shouldn't take the bet.
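The closed form can be checked against the series numerically (a sketch; the rate r = 0.01 is just an illustrative assumption):

```python
import math

# Each round n contributes exp(-r*n) * 2^(n-1) * 2^(-n) = exp(-r*n) / 2,
# a geometric series that sums to (1/2) / (e^r - 1).
def discounted_ev(r, rounds=100_000):
    return sum(math.exp(-r * n) * 0.5 for n in range(1, rounds + 1))

r = 0.01
print(discounted_ev(r))         # ~49.75
print(0.5 / (math.exp(r) - 1))  # ~49.75, the closed form
```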

ReplyDeleteAside from the low probability of winning, the expected value calculation is incorrect. Summing up all of the payouts weighted by their probability only gives you the expected return, but not when that return will be realised. But money invested now in another way would likely grow over time, so you need to discount the payout by an amount that depends on how much time has passed before you receive the payout in order to compare it the the $1000 you have at the start of the bet. This means that the expected payout should be sum_{n=1}^infinity exp(-r*n)*(1/2). Where r is some small non-zero constant depending on interest rate and the time it takes to flip a coin. This sum converges to a finite value: (1/2)/(e^r -1). So if they flip a coin too slowly, then you definitely shouldn't take the bet.

Suppose we can flip the coin once a minute and keep it up for a year. Would you take the bet?

I'm not saying that this is the deciding factor, I'm just pointing out that for any sort of financial calculation you need to discount future gains based on when they will be realised.

But no, of course I wouldn't play this game, at least not as a one-off stand-alone game. It's a game where you expect to lose money (since most paths lead to you losing money), even if the expected return is positive. There's not really much motivation for me to do that in my current circumstances.

No. Try to compute how often you would have to repeat this game so that the probability that you won money exceeds the probability that you lost money. And now compute how much money you would need before starting the game, to be able to repeat the game sufficiently often.

Here is some code that (if correct) simulates repeated runs of the game. In particular, I simulated starting with $100,000,000 and repeatedly taking the bet until I had no money or at least $100,000,001. Out of 500 trials, I made at least $1 only twice.

It's fun to run the code.

import numpy as np

def run_one():
    # Play one game: the payoff is 2^(n-1) dollars when the first head
    # appears on flip n (randint(2) == 1 is treated as tails).
    payoff = 1
    while np.random.randint(2):
        payoff *= 2
    return payoff

def run_many(money, target):
    # Pay $1000 per game; play until broke or at the target bankroll.
    while money < target:
        money += run_one() - 1000
        if money <= 0:
            return 0
    return 1

def run_experiment(start, target, trials):
    y = np.zeros(trials)
    for i in range(trials):
        y[i] = run_many(start, target)
        if i < 10 or i % 10 == 0:
            print("i equals ", i, " and fraction is ", np.average(y[:i+1]))
    return np.average(y)

start = 100000000
target = 100000001
trials = 500
print(run_experiment(start, target, trials))

Nonsense. The expected value computation is wrong. Each term in the sum of .5s must be weighted by the probability that the game did not stop at earlier steps. To steal a term from physics, this is so bad it is not even wrong!

Number the rounds starting with 1. The probability of winning on round n is the probability of getting n - 1 tails followed by one head, i.e., 1 / 2^n. If you win on round n, you win 2^(n-1) dollars. 2^(n-1) * (1 / 2^n) = 1/2. Your expected value is the sum of the expected values on each round, i.e., 1/2 + 1/2 + ...

Readers who are familiar with the doubling cube in backgammon may find it interesting that a similar paradox can arise if there is no limit on how high the cube can go. See my website for more discussion. http://alum.mit.edu/www/tchow/cg/undefined.html

No, I would not pay $1000 to play the game (I just tried it once with a real coin and lost $998) ... but I'm waiting for more insights ...
