People make errors, so any evidence has to be evaluated as to the likelihood of it being reliable. How well are we able to make such an evaluation?
The answer is that when it comes to making sense of probabilistic data, we often perform very poorly. Hundreds of thousands of years of evolution have equipped us with many useful mental abilities -- our instinct to avoid many dangerous situations and our use of language are two obvious examples. However, evolution has not equipped us to handle statistical or probabilistic data -- a very recent component of our lives. Where quantitative data is concerned, if we want to reach wise decisions, it is often safer to rely on mathematics. When we do so, we sometimes find that our intuitions are wildly misleading.
This was illustrated dramatically by the following example proposed by the psychologists Amos Tversky and Daniel Kahneman in the early 1970s. (I considered this example briefly in my July, 1996 column.)
A certain town has two taxi companies, Blue Cabs and Black Cabs. Blue Cabs has 15 taxis, Black Cabs has 85. Late one night, there is a hit-and-run accident involving a taxi. All of the town's 100 taxis were on the streets at the time of the accident. A witness sees the accident and claims that a blue taxi was involved. At the request of the police, the witness undergoes a vision test under conditions similar to those on the night in question. Presented repeatedly with a blue taxi and a black taxi, in random order, he shows he can successfully identify the color of the taxi 4 times out of 5. (The remaining 1/5 of the time, he misidentifies a blue taxi as black or a black taxi as blue.) If you were investigating the case, which company would you think is most likely to have been involved in the accident?
Faced with eyewitness evidence from a witness who has demonstrated that he is right 4 times out of 5, you might be inclined to think it was a blue taxi that the witness saw. You might even think that the odds in favor of it being a blue taxi were exactly 4 out of 5 (i.e., a probability of 0.8), those being the odds in favor of the witness being correct on any one occasion.
However, the facts are quite different. Based on the data supplied, the probability that the accident was caused by a blue taxi is only 0.41. That's right, the probability is less than half. It was more likely to have been a black taxi.
How do you arrive at such a figure? The mathematics you need was developed by an eighteenth-century English minister named Thomas Bayes.
What human intuition often ignores, but what Bayes' rule takes proper account of, is the 0.85 probability (85 out of a total of 100) that any taxi in the town is likely to be black.
Without the testimony of the witness, the probability that it had been a black taxi would have been 0.85, the proportion of taxis in the town that are black. So, before the witness testifies to the color, the probability that the taxi in question was blue is low, namely 0.15. This is what is called the prior probability or the base rate, the probability based purely on the way things are, not the particular evidence pertaining to the case in question.
Specifically, Bayes' method shows you how to calculate the probability of a certain event E (in the above example, a blue taxi being involved), based on evidence (in our case, the testimony of the eyewitness), when you know:
(1) the probability of E in the absence of any evidence;
(2) the evidence for E;
(3) the reliability of the evidence (i.e., the probability that the evidence is correct).
All three pieces of information are highly relevant, and to evaluate the true probability you have to combine them in the right manner. Bayes' method tells you how to do this. It tells us that the correct probability is given by the following calculation (where P(E) denotes the probability of event E occurring):
Compute the product
P(blue taxi) x P(witness is right),
and divide the answer by the sum
[P(blue taxi) x P(witness is right) + P(black taxi) x P(witness is wrong)].
Putting in the various figures, this becomes the product 0.15 x 0.8 divided by the sum [0.15 x 0.8 + 0.85 x 0.2], which works out to be 0.12/[0.12 + 0.17] = 0.12/0.29 = 0.41.
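The calculation above can be sketched in a few lines of code. The function name and its arguments here are my own labels, but the figures (a 0.15 prior for a blue taxi and a witness who is right with probability 0.8) are exactly those of the example.

```python
def posterior_blue(p_blue, p_witness_right):
    """Probability the taxi was blue, given that the witness says 'blue'.

    Applies Bayes' rule: divide the 'blue and witness right' term by the
    sum of that term and the 'black and witness wrong' term.
    """
    p_black = 1 - p_blue
    p_witness_wrong = 1 - p_witness_right
    numerator = p_blue * p_witness_right           # 0.15 x 0.8 = 0.12
    denominator = numerator + p_black * p_witness_wrong  # 0.12 + 0.17 = 0.29
    return numerator / denominator

print(round(posterior_blue(0.15, 0.8), 2))  # prints 0.41
```

Note that plugging in a prior of 0.5 instead of 0.15 would give the intuitive answer of 0.8 -- which shows that the intuitive answer amounts to ignoring the base rate.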
How exactly is the above formula derived? I'll try to explain it for the given example, but you should be warned that it takes a very clear head to follow the argument. The principal lesson to be learned from Bayes' rule is that computing probabilities based on less-than-perfect evidence can be done, but is not at all easy.
The witness claims the taxi he saw was blue. He is right 8/10 of the time. Hypothetically, if he were to try to identify each taxi in turn, under the same circumstances, how many would he identify as being blue?
For the 15 blue taxis, he would (correctly) identify 80% of them as being blue, namely 12. (In this hypothetical argument, we are assuming that the actual numbers of taxis accurately reflect the probabilities.)
For the 85 black taxis, he would (incorrectly) identify 20% of them as being blue, namely 17.
So, in all, he would identify 29 of the taxis as being blue.
Thus, on the basis of the witness's evidence, we find ourselves looking at a group of 29 taxis.
Of the 29 taxis we are looking at, 12 are in point of fact blue.
Consequently, the probability of the taxi in question being blue, given the witness's testimony, is 12/29, i.e. 0.41.
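The counting argument above can also be written out directly, which some readers may find easier to check than the formula. The variable names are my own; the counts are those of the hypothetical 100-taxi test.

```python
blue_taxis, black_taxis = 15, 85
accuracy = 0.8  # the witness identifies colors correctly 4 times out of 5

# Taxis the witness would call "blue" in the hypothetical test:
truly_blue_called_blue = blue_taxis * accuracy            # 12 blue taxis
truly_black_called_blue = black_taxis * (1 - accuracy)    # 17 black taxis
total_called_blue = truly_blue_called_blue + truly_black_called_blue  # 29

# Of the taxis called "blue", the fraction that really are blue:
probability_blue = truly_blue_called_blue / total_called_blue
print(round(probability_blue, 2))  # prints 0.41
```

This is the same 12/29 arrived at above, and it agrees with the answer given by the Bayes formula.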
So much for the reliability of eyewitness evidence. If our intuitions can be so wildly misleading in the case of highly simplified examples, where all the figures we need are presented to us in a clean, neat fashion, what hope do we have in the far more messy real world that juries frequently have to deal with?
Fortunately, you can almost certainly regard this worrying question as purely theoretical, secure in the knowledge that you are unlikely to find yourself on a jury having to make such a difficult call. It has long been recognized that attorneys almost always object to the inclusion of any mathematician on a jury. After all, the last thing they want is a jury that tries to "complicate" the evidence of their star witness with questions about prior probabilities.
- Keith Devlin