After writing the primer on conditional probabilities and their converses vis-a-vis screening tests, I continued to scour the Web for more information on the base rates of conception/pregnancy. But, much to my chagrin, as with my search for scientific studies on the reliability of home pregnancy tests (HPTs) [note 1], I have come up nearly empty-handed. Thus far I've managed to find only one resource (which, unfortunately, provides no citation or rationale for its 50% figure, plausible as that figure sounds).
The probability of being pregnant increases as various indicators come into play. For instance, given coitus within the last menstrual cycle and given that the currently scheduled menses is several days overdue, the probability that one is pregnant rises to around 50%. For our hypothetical HPT in the hands of the "average" user (for whom sensitivity = specificity = 75%), given P(pregnant) = 0.50, the reliability of the HPT is: P(pregnant | test positive) = 75% and P(not pregnant | test negative) = 75%. In this case the true positive and true negative ratings equal the test's sensitivity and specificity only because the base rate = 50% and sensitivity = specificity.
A 75% true positive rating wouldn't warrant declaring HPTs spot on. If the prior probability (base rate) of 50% for pregnancy is held constant, the sensitivity and particularly the specificity have to be >90% to attain a true positive rating close to 100%. (Use the spreadsheet and experiment with different combinations of base rate, sensitivity, and specificity. Instructions for copying the table to your spreadsheet are in Testing Screening Tests.)
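If you'd rather not fire up a spreadsheet, the same experiment can be run with a few lines of code. This is a minimal sketch (function names are mine): it applies Bayes' theorem to check the 75% figure above, then holds the base rate at 50% and raises specificity to show the true positive rating climbing toward 100%.

```python
# Bayes' theorem applied to a screening test.

def true_positive_rating(sens, spec, base_rate):
    """P(condition | test positive)."""
    p_pos = sens * base_rate + (1 - spec) * (1 - base_rate)
    return sens * base_rate / p_pos

def true_negative_rating(sens, spec, base_rate):
    """P(no condition | test negative)."""
    p_neg = (1 - sens) * base_rate + spec * (1 - base_rate)
    return spec * (1 - base_rate) / p_neg

# The "average user" HPT: sensitivity = specificity = 75%, base rate = 50%.
print(true_positive_rating(0.75, 0.75, 0.50))   # 0.75
print(true_negative_rating(0.75, 0.75, 0.50))   # 0.75

# Hold the base rate at 50%, fix sensitivity at 95%, and push specificity up:
# the true positive rating approaches 100% only as specificity climbs past 90%.
for spec in (0.75, 0.90, 0.99):
    print(spec, round(true_positive_rating(0.95, spec, 0.50), 3))
```

The first two lines confirm that with a 50% base rate and equal sensitivity and specificity, the converse probabilities collapse onto the test's own ratings, exactly as noted above.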
It's claimed that HPTs are accurate, that "if the test result is positive, you're almost certainly pregnant," although "negative results are less reliable." However, as Bastian et al. discovered, even if the laboratory-determined sensitivity and specificity of an HPT kit are high, once the kit is in the hands of consumers the actual sensitivity drops, because most users fail to follow the instructions to the letter. User error detracts from the reliability of the test.
HPTs rely on detecting the level of human chorionic gonadotrophin (HCG) in urine. HCG is a chemical whose amount increases over time after conception but is present in insufficient quantities during the first week or so for the HPT to return a positive finding. Thus most HPT kits are meant to be used only after the expected menstrual period has been missed. If a user tests herself within a week of unprotected intercourse, chances are there still won't be enough HCG for the HPT to detect, even if conception has taken place. Such misuse lowers the effective sensitivity of the test.
Moreover, as we've seen, what we're interested in discovering are the probabilities of the converse, i.e., not the sensitivity and specificity but their inverse--P(pregnant | test positive) and P(not pregnant | test negative). To obtain those figures we need to take the base rate of pregnancy into consideration. And so even with sensitivity and specificity >90%, a true positive rating close to 100% is possible only if the prior probability of pregnancy is high enough (>50%). To fail to factor in P(pregnancy) is to commit the base rate fallacy.
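The dependence on the base rate is easy to demonstrate numerically. In this sketch (the prior values are illustrative choices of mine), sensitivity and specificity are both fixed at 95%, and only the base rate varies:

```python
# Even with sensitivity = specificity = 95%, the true positive rating
# P(pregnant | test positive) is near 100% only when the prior is already high.

def true_positive_rating(sens, spec, base_rate):
    """P(condition | test positive), via Bayes' theorem."""
    p_pos = sens * base_rate + (1 - spec) * (1 - base_rate)
    return sens * base_rate / p_pos

for base in (0.01, 0.10, 0.50, 0.80):
    print(base, round(true_positive_rating(0.95, 0.95, base), 3))
```

At a 1% base rate the true positive rating is only about 16%, despite the excellent sensitivity and specificity; at a 50% base rate it is 95%, and it exceeds 98% only once the prior reaches about 80%. Ignoring that prior is precisely the base rate fallacy.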
According to studies, even doctors can misunderstand the nature of sensitivity and specificity and fail to see the need to compute the probability of the converse.
"[C]onditional statements are often confused with their converses. When they evaluate medical research, physicians routinely deal with statistics of the sensitivity and specificity of laboratory test results. In a 1978 study reported in the New England Journal of Medicine, it became apparent that physicians often misunderstand the results of these tests" [note 2]. In a study by David Eddy, after being given the values for the base rate, sensitivity, and the complement of specificity (i.e., 1 - specificity), 95% of the doctors surveyed overestimated P(malignancy | test positive) by an order of magnitude. "David Eddy reported that it's no wonder physicians confused these conditional probabilities; the authors of the medical research often made the error themselves in reporting their results" [note 3].
To end, let's look at conditional probabilities using "if / then" notation [note 4]. If a patient has cancer, then the probability of testing positive is 85%. That's sensitivity. If a patient does not have cancer, then the probability of testing negative is 90%. That's specificity. Now we want to know the converse: if an individual tests positive, then the probability of him/her having cancer is ___. And if a person tests negative, then the probability of him/her not having cancer is ___. They're both blank because we need the prevalence rate (base rate) of the particular type of cancer to determine those values. And the lower the prevalence of the disease, (1) the higher the false positive rating of the screening test and (2) the lower its false negative rating will be.
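The blanks above can be filled in once a prevalence is supplied. Here is a sketch using the 85%/90% figures from the example; the two prevalence values (1% and 10%) are my own illustrative assumptions, since the text deliberately leaves the prevalence unspecified:

```python
# Filling in the blanks of the cancer example for two assumed prevalences.

def p_cancer_given_positive(sens, spec, prev):
    """P(cancer | test positive)."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def p_cancer_given_negative(sens, spec, prev):
    """P(cancer | test negative)."""
    return (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))

sens, spec = 0.85, 0.90  # figures from the if/then example above
for prev in (0.01, 0.10):
    fp_rating = 1 - p_cancer_given_positive(sens, spec, prev)  # P(no cancer | positive)
    fn_rating = p_cancer_given_negative(sens, spec, prev)      # P(cancer | negative)
    print(prev, round(fp_rating, 3), round(fn_rating, 4))
```

At 1% prevalence the false positive rating is about 92% (most positives are false alarms) while the false negative rating is under 0.2%; at 10% prevalence the false positive rating drops to about 51% and the false negative rating rises to about 1.8%. That is exactly the pattern stated above: lower prevalence means a higher false positive rating and a lower false negative rating.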
In general, the statement "if A then B" doesn't imply that its converse, "if B then A," is true. Thus, given "if I drink and drive then I will figure in a mishap," it would be erroneous to conclude that "if I'm in an accident then I was driving drunk." I could be in a mishap even if I haven't had a drink, and even if I'm only the passenger.
1. To find all studies with "home pregnancy test" in their titles, I typed "home pregnancy test[Title]" (without the quotes) into the search box on the PubMed site. Only 13 studies (mostly irrelevant) came up.
2. Bennett, Deborah J. Logic Made Easy: How to Know When Language Deceives You. New York: W.W. Norton & Co., 2004. p. 109. The NEJM article referred to is "Interpretation by Physicians of Clinical Laboratory Results" by Ward Casscells, Arno Schoenberger, and Thomas B. Graboys.
3. Bennett, p. 110.
4. Bennett, p. 109.