Devlin's Angle

November 2005

Common confusions II

Last month in this column I discussed some of the confusions about mathematical issues that occasion people to contact me. By far the most common topic concerns probability. Probability calculations cause so many problems for so many people that there is little chance that I can do more than scratch the surface in this column, so I'm going to zero in on just one issue. It is, however, the issue that, over many years of corresponding with people who have written to me, I have come to believe is the root of the majority of the problems: What exactly does a numerical probability tell us?

For the kinds of example that teachers and professors typically use to introduce students to probability theory, the issue seems clear cut. If you toss a fair coin, they say, the probability that it will come down heads is 0.5 (or 50%). [Actually, as the mathematician Persi Diaconis demonstrated not long ago, it's not exactly 0.5; the physical constraints of tossing an actual coin result in roughly a 0.51 probability that it will land the same way up as it starts. But I'll ignore that wrinkle for the purposes of this explanation.] What this means, they go on to say, is that, if you tossed the coin, say, 100 times, then roughly 50 times it would come down heads; if you tossed it 1,000 times, it would come down heads roughly 500 times; with 10,000 tosses, heads would come up roughly 5,000 times; and so forth. The actual numbers may vary each time you repeat the entire process, but in the long run you will find that roughly half the time the coin will land on heads. This can be expressed by saying that the probability of getting heads is 1/2, or 0.5.
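A quick way to see this long-run behavior for yourself is to simulate it. The short Python sketch below is my own illustration (the function name and the choice of batch sizes are mine, not anything from the column): it tosses a simulated fair coin in ever larger batches and prints the fraction of heads, which drifts toward 0.5 as the number of tosses grows.

```python
import random

# Toss a simulated fair coin n times and report the fraction of heads.
def fraction_of_heads(n):
    heads = sum(1 for _ in range(n) if random.random() < 0.5)
    return heads / n

for n in (100, 1000, 10000, 100000):
    print(n, "tosses:", fraction_of_heads(n))
```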

Similarly, if you roll a fair die repeatedly, you will discover that it lands on 3 roughly 1/6 of the time, so the probability of rolling a 3 is 1/6.

In general, if an action A can be performed repeatedly and its possible outcomes are all equally likely, the probability of getting the outcome E is calculated by taking the number of different ways E can arise and dividing by the total number of different outcomes that can arise from A. Thus, the probability that rolling a fair die will result in getting an even number is given by calculating the number of ways you can get an even number (namely 3, since each of 2, 4, and 6 is a possible even number outcome) and dividing by the total number of possible outcomes (namely 6, since each of 1, 2, 3, 4, 5, 6 is a possible outcome). The answer, then, is 3 divided by 6, or 0.5.
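The counting recipe is mechanical enough to spell out in a few lines of code. This is just a sketch of the calculation described above (the variable names are my own), assuming, as above, that the six faces of the die are equally likely.

```python
# Probability by counting, assuming all outcomes of one roll are equally likely.
outcomes = [1, 2, 3, 4, 5, 6]                    # possible results of rolling a fair die
favorable = [x for x in outcomes if x % 2 == 0]  # the even outcomes: 2, 4, 6
probability = len(favorable) / len(outcomes)     # 3 / 6 = 0.5
print(probability)
```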

Notice that the probability is assigned to a single event, not the repetition of the action. In the case of rolling a die, the probability of 0.5 that the outcome will be even is a feature of the action of rolling the die (once). It tells you something about how that single action is likely to turn out. Nevertheless, it derives from the behavior that will arise over many repetitions, and it is only by repeating the action many times that you are likely to observe the pattern of outcomes that the probability figure captures.

Probability is, then, an empirical notion. You can test it by experiment. At least, the kind of probability you get by looking at coin tossing, dice rolling, and similar activities is an empirical notion. What causes confusion for many people is that mathematicians were not content to restrict the quantification of uncertainty to games of chance.

Consider the following scenario. Suppose you come to know that I have a daughter who works at Google; perhaps you meet her. I then tell you that I have two children. This is all you know about my family. What do you judge to be the likelihood (dare I say, the probability?) that I have two daughters? (For the purposes of this example, we'll assume that boys and girls are born with exactly 50% likelihood.)

If you are like many people, you will argue as follows. "I know Devlin has one daughter. His other child is as likely to be a boy as a girl. Therefore the probability that he has two daughters is 1/2 (i.e., 0.5, or 50%)."

That reasoning is fallacious. If you reason correctly, the probability to assign to my having two daughters is 1/3. Here is the valid reasoning. In order of birth, the gender of my children could be B-B, B-G, G-B, G-G. Since you know that one of my children is a girl, you know that the first possibility listed here does not arise. That is, you know that the gender of my children in order of birth is one of B-G, G-B, G-G. Of these three possibilities, in two of them I have one child of each gender, and in only one do I have two daughters. So your assessment of the likelihood of my having two daughters is 1 out of 3.
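If the 1/3 answer still feels wrong, a simulation may help. The hedged Python sketch below is my own illustration: it generates many two-child families at random, keeps only those known to include at least one girl, and looks at how often both children are girls. The fraction comes out close to 1/3, matching the reasoning above.

```python
import random

trials = 100000
families_with_a_girl = 0
families_with_two_girls = 0

for _ in range(trials):
    # Each child is a girl ('G') or a boy ('B') with probability 1/2, in birth order.
    children = [random.choice("GB"), random.choice("GB")]
    if "G" in children:                      # keep only families known to include a girl
        families_with_a_girl += 1
        if children.count("G") == 2:
            families_with_two_girls += 1

print(families_with_two_girls / families_with_a_girl)   # comes out close to 1/3
```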

But even if you figure it out correctly, what exactly is the significance of that 1/3 figure? As a matter of fact, I do have two children and one of my children is a daughter who works at Google. Does anyone believe that I live in some strange quantum-indeterminate world in which my other child is 1/3 daughter and 2/3 son? Surely not. Rather, that 1/3 probability is a measure of your knowledge of my family.

As it happens, I have two daughters. So, if you asked me what probability I would assign to my having two daughters, I would say probability 1.

Does this mean that different people can rationally assign different probabilities to the same event? Not in the case of things like coin tossing or dice rolling. But that's not what's going on here. The probability I asked you to calculate a moment ago was a quantitative measure not of my family but of your knowledge of my family. And there is no reason why the measure you should ascribe to your knowledge is the same as I ascribe to mine. We know different things.

In my experience, it's when probabilities are attached to information that most people run into problems.

The concept of probability you get from looking at coin tossing, dice rolling, and so forth is generally referred to as "frequentist probability". It applies when there is an action, having a fixed number of possible outcomes, that can be repeated indefinitely. It is an empirical notion that you can check by carrying out experiments.

The numerical measure people assign to their knowledge of some event is often referred to as "subjective probability". It quantifies your knowledge of the event, not the event itself. Different people can assign different probabilities to their individual knowledge of the same event. The probability you assign to an event depends on your prior knowledge of the event, and can change when you acquire new information about it.

Having made the distinction, however, I should point out that it is not as clear cut as might first appear. Sometimes a subjective probability is more psychological than mathematical, such as when someone says "I'm 99% certain I turned the gas off before I left." At the other end of the spectrum, any frequentist probability can be viewed as a subjective probability. For instance, the probability of 1/2 that I assign to the possibility of getting a head when I toss a fair coin ten minutes from now is, when thought of as a measure of my current knowledge about a future event, a subjective probability according to the definition just given. (Clearly, when we quantify our information about a future occurrence of a repeatable action, where the frequentist notion of probability applies, we should assign the frequentist value.)

I am sure (90.27% sure, to be precise!) that a confusion between the frequentist and subjective notions of probability is what lies behind the trouble many people have in understanding the reasoning of the notorious Monty Hall problem that I discussed in this column a couple of years ago (http://www.maa.org/devlin/devlin_07_03.html). That problem is posed so that it appears to be about a physical situation (where a prize is hidden), but in fact it is not; it's about your individual knowledge of that situation, and how that knowledge changes as you receive additional information.

In fact, there is an entire branch of probability theory devoted to the way probabilities may be updated as new information arises: Bayesian inference. I discussed Bayesian theory in this column back in 2000: www.maa.org/devlin/devlin_2_00.html.
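To see the mechanics of such an update in the simplest possible case, the two-daughters example can be recast as a single application of Bayes' rule. The sketch below is my own worked illustration, not something from the columns linked above: it starts from the prior probability 1/4 that both children are girls and updates it on the information that at least one child is a girl.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
prior_two_girls = 0.25   # P(G-G) before you know anything about the children
likelihood = 1.0         # P(at least one girl | G-G)
evidence = 0.75          # P(at least one girl) = 1 - P(B-B) = 1 - 0.25

posterior = likelihood * prior_two_girls / evidence
print(posterior)         # 0.333..., i.e. 1/3
```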

Although probability theory arose in studies of outcomes at the gaming tables of sixteenth- and seventeenth-century Europe, and although scenarios such as tossing coins or rolling dice provide simple, easily understandable introductory examples, there are today so many important applications of Bayesian inference that I have come round to the belief that those of us in the math ed business would better serve our students if we introduced probability from the very start as a measure of our knowledge of things that happen in the world, not a measure of the world itself. The outcomes of gambling games and state lotteries would then be just one special category where the probabilities we ascribe to our knowledge may be computed with total precision.

Finally, let me end with a fascinating idea put forward by the great Italian mathematician Bruno de Finetti (1906-1985) to add numerical precision to even highly subjective probability assessments. I'll use de Finetti's idea to examine my earlier example of the person who says they are "99% certain" they turned off the gas. It's possible to replace that vague "99% certain" figure by a more meaningful certainty measure by asking the individual who makes the claim to play a "de Finetti game."

Let's suppose you are the person who makes the claim. I now offer you a deal. I present you with a jar containing 100 balls, 99 of them red, 1 black. You have a choice. Either you draw one ball from the jar, and if it's red, you win $1m. Or we can go back and check the gas, and if you did indeed turn it off, I give you $1m.

Now, if your "99% certain" claim were an accurate assessment of your confidence, it would not make any difference whether you choose to pick a ball from the jar or go back with me and check the status of the gas stove. But I suspect that, when it comes to the crunch, you will elect to pick a ball from the jar. After all, there is only 1 chance in 100 that you will fail to win $1m. You'd be crazy not to go for it.

By electing to pick a ball, you have demonstrated that what I will call your rational confidence that you have turned off the gas is at most 99%.

Now I offer you a jar that contains 95 red balls and 5 black, with choosing a red ball again netting you $1m. Assuming you again choose to select a ball rather than go and check out the gas, we may conclude that your rational confidence that you have turned off the gas is at most 95%. If it were really more than that, you should decline the ball-picking offer and go with me to check out the gas at your home. (So much for your "99%" claim!)

Then I offer a jar with 90 red balls and 10 black. If you choose to pick a ball this time, your rational confidence that you have turned off the gas can be at most 90%.

And so on.

Eventually, you decide you would rather check the gas than select a ball from the jar. If that first happens when there are N red balls in the jar, then your rational confidence is at least N% and at most the figure you accepted on the previous round; by making the offers in smaller steps, we can pin it down as precisely as we like. The de Finetti procedure has established an exact correspondence between your subjective probability and a frequentist probability.
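The stepping procedure itself is simple enough to write down as code. Here is a hedged Python sketch of the game just described; the function prefers_jar stands in for the player's answer at each offer and is purely hypothetical, as is my choice of a 1% step size (the column's offers went down in larger steps).

```python
def rational_confidence(prefers_jar, step=1):
    """Play the de Finetti game: keep lowering the number of red balls until
    the player would rather go and check the gas than draw from the jar.
    prefers_jar(n) is a hypothetical stand-in for the player's answer:
    True means they draw a ball when the jar holds n red balls out of 100."""
    n = 99
    while n > 0 and prefers_jar(n):
        n -= step
    return n   # the player's confidence, pinned down to the step size

# Example: a player whose true confidence is 93% draws a ball only while
# the jar offers a better-than-93% chance of winning.
print(rational_confidence(lambda n: n > 93))   # prints 93
```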

Neat, eh?


Devlin's Angle is updated at the beginning of each month.
Mathematician Keith Devlin (email: [email protected]) is the Executive Director of the Center for the Study of Language and Information at Stanford University and The Math Guy on NPR's Weekend Edition. Devlin's newest book, THE MATH INSTINCT: Why You're a Mathematical Genius (along with Lobsters, Birds, Cats, and Dogs) was published recently by Thunder's Mouth Press.