When confronted with conditional probability, my advice is that you be completely process-driven; identify what's given, then follow the process and the formulas religiously. After a while, you can start to see it intuitively, but it does take a while. It's all about what you are given, and how you define things.
In my MBA stats class, one of the problems that always stumped the students was a conditional problem:
“Pregnancy tests, like almost all health tests, do not yield results that are 100% accurate. In clinical trials of a blood test for pregnancy, the results shown in the accompanying table were obtained for the Abbot blood test (based on data from "Specificity and Detection Limit of Ten Pregnancy Tests" by Tiitinen and Stenman, in the Scandinavian Journal of Clinical Laboratory Investigation, 53, Supplement 216). The disclaimer in the journal stated that other tests are more reliable than the test with results given in this table.
|                         | Positive Result | Negative Result |
|-------------------------|-----------------|-----------------|
| Subject is pregnant     | 80              | 5               |
| Subject is not pregnant | 3               | 11              |
“1. Based on the results in the table, what is the probability of a woman being pregnant if the test indicates a negative result?
“2. Based on the results in the table, what is the probability of a false positive; that is, what is the probability of getting a positive result if the woman is not actually pregnant?”
Everyone would just try to look at it as though there were no conditions...they would say, 5/80 for question 1, and 3/80 for question 2. The first question, though, is asking "what is the chance of being pregnant, given a negative result?" There were 16 negative results, and of those, 5 came from women who were pregnant. So the answer is 5/16, or 31.25%. The second question asks for the probability of a positive result, given that the woman is not pregnant. In this case, there are 14 non-pregnant women, and 3 of those got a positive result. So that's 3/14, or about 21.43%.
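If it helps to see the bookkeeping spelled out, here is a minimal Python sketch of the same two calculations. The counts come straight from the table above; the variable names are just mine for illustration.

```python
# Counts from the pregnancy-test table
pregnant_positive = 80      # pregnant, tested positive
pregnant_negative = 5       # pregnant, tested negative
not_pregnant_positive = 3   # not pregnant, tested positive (false positive)
not_pregnant_negative = 11  # not pregnant, tested negative

# Question 1: P(pregnant | negative result)
total_negative = pregnant_negative + not_pregnant_negative       # 16 negative results
p_pregnant_given_negative = pregnant_negative / total_negative   # 5 / 16

# Question 2: P(positive result | not pregnant)
total_not_pregnant = not_pregnant_positive + not_pregnant_negative          # 14 non-pregnant women
p_positive_given_not_pregnant = not_pregnant_positive / total_not_pregnant  # 3 / 14

print(f"P(pregnant | negative)     = {p_pregnant_given_negative:.4f}")      # 0.3125
print(f"P(positive | not pregnant) = {p_positive_given_not_pregnant:.4f}")  # 0.2143
```

Notice that the denominator in each case is the total of the "given" group, not the total of the table; that is the whole trick.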
These numbers, and this idea, are really important--that is, they carry real-world import. Some statisticians make their living explaining these concepts to juries. People get fired or arrested because of false positives on urinalysis and other tests, because there is a general impression that they are far more reliable than they actually are.
Let’s look at a different example. In the military, people are given random drug screenings. The test is “certified 99% accurate.” I was always told that this means that if you do drugs, and you’re tested, it will catch you 99 percent of the time. We think, “logically,” that this means there is only a one percent false negative rate…that someone who does drugs escapes detection only one percent of the time. Worse, we assume that if the “false negative rate” is only one percent, the false positive rate must also be one percent…it’s just common sense, right?
But “common sense” isn’t…it’s neither common nor truly sensical. Look at it this way…suppose we test 100,000 service members. Suppose further that 0.1%, or 1 in a thousand, of service members actually do drugs. We might get this:
|               | Do Drugs | Don't Do Drugs |
|---------------|----------|----------------|
| Test Positive | 99       | 999            |
| Test Negative | 1        | 98,901         |
Tables like this are informative, but they don’t tell the whole story. You can see from this that the company is technically correct…at least in this case, of 100 people who did drugs, 99 were caught and 1 was not. But the false positive rate and the false negative rate depend on more than those two cells. To get the whole story, it also helps to compute the marginals, or row and column totals:
|               | Do Drugs | Don't Do Drugs | Total   |
|---------------|----------|----------------|---------|
| Test Positive | 99       | 999            | 1,098   |
| Test Negative | 1        | 98,901         | 98,902  |
| Total         | 100      | 99,900         | 100,000 |
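Here is a short Python sketch that rebuilds this table from the assumptions above: 100,000 people tested, 0.1% prevalence, and “99% accurate” read as both 99% sensitivity and 99% specificity (that last reading is my assumption, since the certification itself doesn’t say).

```python
# Rebuild the drug-test table from the stated assumptions:
# 100,000 tested, 0.1% actually do drugs, test is "99% accurate"
# (assumed here to mean 99% sensitivity and 99% specificity).
tested = 100_000
prevalence = 0.001   # 1 in 1,000 do drugs
accuracy = 0.99

users = round(tested * prevalence)        # 100 drug users
non_users = tested - users                # 99,900 non-users

true_positives = round(users * accuracy)              # 99 users caught
false_negatives = users - true_positives              # 1 user missed
false_positives = round(non_users * (1 - accuracy))   # 999 non-users flagged
true_negatives = non_users - false_positives          # 98,901 non-users cleared

total_positive = true_positives + false_positives     # 1,098 positive tests
total_negative = false_negatives + true_negatives     # 98,902 negative tests

print(f"Test Positive: {true_positives:>6} {false_positives:>7} {total_positive:>8}")
print(f"Test Negative: {false_negatives:>6} {true_negatives:>7} {total_negative:>8}")
print(f"Totals:        {users:>6} {non_users:>7} {tested:>8}")
```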
Numbers like this, the numbers of people tested, are very important. They help us figure out our givens. The false negative rate is not the number of people who did drugs and tested negative. It’s the number out of all the people who tested negative who actually did drugs. In this case, the false negative rate is much better than advertised…it’s 1/98,902, or about 0.00001, meaning roughly one in 100,000 of the people who test negative actually do drugs and get away with it.
The consequences, though, are on the false positive side…this is where people get turned away for employment, get fired, etc. In the case of the military, a lot of people end up in a lot of trouble with the random urinalysis program. While we want to be cautious, and we don’t want a lot of druggies flying or controlling aircraft or tanks or other deadly weapons, we should also be concerned that we might be ruining careers unnecessarily. If we look at the table, the “common sense” interpretation of the false positive rate would be 999/100,000, or 0.999 percent, very close to the one percent we assumed initially. But, as astounding as it may seem considering the number of people who are convicted each year because of this assumption, this is entirely incorrect!
The actual false positive rate is the share of positive results that are wrong, that is, the number of non-drug users out of the total number of positives. In this case, that’s 999 out of 1,098, or 90.98%! In other words, your chance of actually being a drug user, given a positive result on this “99% accurate” test, is only 9.02%!
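That 9.02% also falls straight out of Bayes’ theorem. Here is a sketch of that calculation, again assuming a 0.1% prevalence and a test that is 99% sensitive and 99% specific:

```python
# P(user | positive) via Bayes' theorem, under the same assumptions:
# prevalence 0.1%, sensitivity 99%, specificity 99%.
prevalence = 0.001
sensitivity = 0.99   # P(positive | user)
specificity = 0.99   # P(negative | non-user)

# Total probability of a positive test: true positives plus false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_user_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive)            = {p_positive:.5f}")                 # ~0.01098
print(f"P(user | positive)     = {p_user_given_positive:.4f}")      # ~0.0902
print(f"P(non-user | positive) = {1 - p_user_given_positive:.4f}")  # ~0.9098
```

The low base rate is what drives the result: even a very accurate test produces far more false positives than true positives when almost nobody in the tested population actually does drugs.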
Yes, it’s tricky. No, it’s not easy. But it’s important. It touches lives. Juries, lab technicians, doctors and nurses, lawyers, employers, employees and patients who don’t understand this put either themselves or others in peril every day.