Yesterday, at Waterloo Ignorance Day, one speaker mentioned the Ellsberg paradox, which I hadn't heard of before. Believe it or not, it is named for Daniel Ellsberg, who would later become famous for releasing the Pentagon Papers.
Here it is: you have an urn with 90 well-mixed balls. There are R red balls, Y yellow balls, and B black balls. You know only the following information: R = 30, and Y+B = 60. You now get to choose between
Gamble A: win $100 if you draw a red ball vs.
Gamble B: win $100 if you draw a black ball.
You are also given a choice between
Gamble C: win $100 if you draw a red or yellow ball.
Gamble D: win $100 if you draw a black or yellow ball.
Which choices do you prefer? A over B or B over A? And C over D or D over C?
Wednesday, December 07, 2011
The Ellsberg Paradox
Posted by Jeffrey Shallit at 10:29 AM
Labels: decision theory, Ellsberg paradox, probability
There is no way to choose between A and B. The chances of payoff in B depend on the relative proportion of yellow vs. black, which you have no information on -- it could be better or worse than A. Lacking other information, I intuitively assume the number of black balls follows a uniform distribution on [0...60], which has an expected value of 30, which gives the same odds as A.
There is also no way to choose between C and D. The payoff chance with D is 2/3. The payoff chance with C is 1/3 + Y/90. Making the same assumption about Y as we did for B in the previous case, we act as if Y = 30, which gives the same odds for C as for D.
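The equal-expected-value claim above can be checked directly; here is a minimal Python sketch (the uniform prior on Y is the commenter's assumption, not part of the problem statement):

```python
from fractions import Fraction

# Uniform prior over the unknown split: Y can be 0..60, B = 60 - Y (an assumption).
configs = range(61)
prior = Fraction(1, 61)

def p_win(y):
    # Win probability of each gamble in one configuration (90 balls total).
    b = 60 - y
    return {
        "A": Fraction(30, 90),      # red
        "B": Fraction(b, 90),       # black
        "C": Fraction(30 + y, 90),  # red or yellow
        "D": Fraction(b + y, 90),   # black or yellow
    }

# Average over the prior: expected win probability of each gamble.
expected = {g: sum(prior * p_win(y)[g] for y in configs) for g in "ABCD"}
print(expected)  # A and B both 1/3; C and D both 2/3
```

Under this prior, A and B are exactly tied, as are C and D, so expected value alone gives no reason to prefer either.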
Yet most people prefer A to B and D to C.
Do we know that there is at least one yellow ball and one black ball, or is it possible there are zero of one of those?
If there's at least one, then the best answers [spoiler alert] seem to be B and C. You've got a 33% chance of drawing a red ball. If there are anywhere from 30 to 59 black balls in there, you have a 33% chance *or better* of drawing a black ball. This represents a range of 30 numbers (from 30-59). Your chances of drawing a black ball would be worse than 33% if there were anywhere from 1-29 of them in the urn - this represents a range of 29 numbers. So there are more potential urns where your chances of drawing a black ball are 33% *or greater* than there are urns where your chances are less than 33%. So it's better to bet $100 on drawing a black ball.
Similar logic seems to apply for the C/D choice.
...But I suck at probability, so this is probably (hah!) entirely wrong.
Y or B could be 0.
So, the paradox is the preference for known probabilities over unknown probabilities? Because AFAICT, A and D have known probabilities, while B and C are unknown.
If so, it's not much of a paradox.
I think the paradox is interesting because it combines two different interpretations of probability - even if monobrows don't.
I think this one may be linked to mistrust (and possibly game theory).
People could be expecting that the one proposing the bet will set it up so that he minimizes his loss, i.e. that the probability distribution of B is nonuniform, biased towards lower values for the first choice and towards higher values for the second.
I agree on the "A to B" comparison - there's no way to choose. However, I would prefer D to C for the simple reason that it gives a guaranteed rate of return. If the gamble is repeated, D becomes a "sure thing", while C is a gamble whose odds you won't know until you see the result.
Yet most people prefer A to B and D to C.
Most people never took elementary Probs & Stats ;-) (or never took it to heart, anyway)
My head would calculate the expected value and after finding no difference, leave it up to my gut to decide: so option A and D - preferring certainty over uncertainty.
Related article: http://experimentalmath.info/blog/2011/12/innumeracy-and-public-risk/ ("Innumeracy and public risk") - i.e., few people are good at evaluating/assessing risk.
It seems like you're saying that the following two are exactly equal and you would not prefer one over the other:
(a) $100 million
(b) A fair coin is flipped. If heads, you get $0. If tails, you get $200 million
Most people would prefer (a) to (b).
It's not quite that simple, because in none of the alternatives of the paradox are you guaranteed to win anything.
(I am a different "anonymous")
ISTM that the choices between $0, $100 million, and $200 million are different from choices between (let's say) $0, $1, and $2. My life would be no different if I had $200 million rather than "only" $100 million, but it would be different from $0. I wouldn't feel the same way about $2, $1, and $0.
I suppose that I would phrase it like this: the subjective value of those different payoffs is different from the number of dollars. For small amounts of money, the subjective value is very close to the objective value, but for large amounts of money, the subjective value levels off.
What are the two different interpretations of probability here? My knowledge of the subject does not extend beyond calculations involving dice and cards.
This kind of problem is beyond me, so I asked by email someone who is a member of Mensa and an accountant, but who took (higher) mathematics ages ago; so maybe those of you here who have taken higher mathematics in recent times would have an edge.
A = 30/90, or 1/3
B ≥ 1/90 but < 60/90, or < 2/3 – average is 31/90
- so perhaps B is slightly preferred!
C < 90/90 (or < 1) but > 30/90 (or > 1/3) – say about 2/3 on average
D = 60/90, or 2/3
- so toss a coin!
I believe Amos and Tversky (who are mentioned in the footnotes of the Ellsberg pardox (sic) page), would add a spin on the question -- something like this:
Gamble A: pay $100 if you draw yellow or black ball - vs.
Gamble B: pay $100 if you draw a yellow or red ball
It is the "frequentist" interpretation (where probability is supposed to reflect the outcomes if we perform an infinite number of trials) versus an interpretation that reflects our lack of knowledge of the situation ("Bayesian").
If we bet on a red ball being drawn, since we know there are 30 out of 90, presumably if we do an infinite number of trials we will draw a red ball 1/3 of the time. So the probability is 1/3.
If we bet on a yellow ball being drawn, however, since we don't know how many there are (it could be any number between 0 and 60), if we do an infinite number of trials, we would draw a yellow ball p of the time, where p is some unknown number between 0 and 2/3.
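This frequentist picture can be sketched with a quick Monte Carlo run; the hidden yellow counts tried below are arbitrary assumptions for illustration:

```python
import random

random.seed(0)

def draw(y):
    # One draw from an urn with 30 red, y yellow, and 60 - y black balls.
    urn = ["red"] * 30 + ["yellow"] * y + ["black"] * (60 - y)
    return random.choice(urn)

n = 100_000
freqs = {}
for y in (0, 15, 45):  # three hypothetical hidden configurations (assumption)
    draws = [draw(y) for _ in range(n)]
    freqs[y] = (draws.count("red") / n, draws.count("yellow") / n)
    print(y, freqs[y])
```

Whatever the hidden y, the red frequency settles near 1/3, while the yellow frequency shifts with y - which is exactly the known/unknown asymmetry described above.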
Eamon: the interesting thing is that even if B does not follow a uniform distribution, it still makes no sense (in terms of expected value) to prefer A over B and then D over C, as most do.
Anonymous @ 6:49
The mistake your Mensa friend has made is to assume that Y > 0. This is not necessarily true the way the problem is stated. So the average is not 31/90.
Certainly OneBrow has hit on a plausible reason why people prefer A to B and D to C: the known probability is favored over the unknown. But it's still strange that preferring A to B suggests a belief that Y > B, while preferring D to C suggests a belief that Y < B.
What would happen if you replaced it with a pure probability question? There are 61 bags, each filled with 30 red balls and a different combination of yellow and black balls. First pick a bag randomly, then a ball from the bag. In this case, would people make different choices?
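For the 61-bag version proposed above, the two-stage probabilities can be computed exactly; a small sketch (bag y holds 30 red, y yellow, and 60 - y black balls, chosen uniformly as specified):

```python
from fractions import Fraction

def p(count):
    # Average win probability over the 61 equally likely bags,
    # where count(y) is the number of winning balls in bag y.
    return sum(Fraction(count(y), 90) for y in range(61)) / 61

p_red = p(lambda y: 30)
p_black = p(lambda y: 60 - y)
p_red_or_yellow = p(lambda y: 30 + y)
p_black_or_yellow = p(lambda y: 60)

print(p_red, p_black)                      # both 1/3
print(p_red_or_yellow, p_black_or_yellow)  # both 2/3
```

Averaged over the bags, black is exactly as likely as red, so the pure-probability reformulation gives no mathematical reason to prefer A or D.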
I think even if you carefully rule out the deceit explanation, there might still be a preference for A and D. It's just easier to see the probabilities right away. I have to admit that I first started thinking in terms of variance and whether there was any difference, but "obviously" the variance is the same because the payoff is either 0 or 100 with identical probabilities. It's just that the probability of getting the red ball is something I can understand instantly, and I can maintain that awareness in my head. On the other hand, the probability of getting a black ball in the 61 bag case requires a couple of steps that I cannot really maintain awareness of (I could probably think of a way, but it is not as easy).
Imagine you are at a casino, and given the choice between A and B. Since absolutely *nothing* has been said about the probability distribution of Black, and I know the casino never wants to lose money, I would assume B=0, or at least B<30, so it makes sense to always prefer A over B.
This argument doesn't work for the second case, unless you assume the casino is not even choosing Y and B until after you make your bet. If you believe this to be the case, you should select D because if you select C the casino will just take Y=0.
Jeff - you said: "...preferring A to B suggests a belief that Y > B, while preferring D to C suggests a belief that Y < B."
I may not be understanding something, but here's how I think about it: we can't tell whether Y > B or Y < B.
In preferring A to B, it is certainly possible that Y > B; therefore, I go with the known probability of A (rather than the unknown probability of B).
In preferring D to C, I know that (Y + B) > R (i.e., 60 > 30) - it does not matter whether Y > B or Y < B. Together, they are > R.
If there's something I've missed, or a fallacy that I've committed, please let me know - I'm eager to learn. Thanks.
Vlodko: what you missed is that in preferring D to C you are comparing not (B+Y) to R but rather (B+Y) to (R+Y). So this suggests you believe that B > R, and hence B > Y.
"Yet most people prefer A to B and D to C."
So what?! This is not about mathematics but about psychology (the so-called "expected utility hypothesis", which has no particular reason to be taken for granted). First, most people aren't able to produce a decent probability estimate even for a rather simple problem like this one; they feel lost in the face of uncertainty and pick the one bet they can cope with (that is, the first obvious 1/3). Supposing one does do the math correctly, one still has to choose between two possibilities perfectly equal in terms of utility: that's where the psychological factor comes into play. I suppose the preference is driven by the number of unknown parameters: in the first case (A) one knows one has a fixed 1/3 probability of winning, while in the second case (B) there is an additional uncertainty about the "actual winning probability", which could range anywhere between 0 and 2/3 (where by "actual winning probability" one means the probability of winning with the balls already placed in the urn, in a given unknown configuration). This second "uncertainty parameter" is probably perceived as a supplementary risk factor (even if the mathematics says otherwise) that one does not want to assume.
Beyond that, I fail to see why Ellsberg called this a "paradox": there is no mathematical incongruity, just a psychological limitation on the applicability of a hypothesis.