Thursday, June 09, 2011
My very first orb!
Close-up from the same image:
OMG! My car's possezzed! An automobilic ectoplasmic globule has come out of the woodwork (or metalwork). This must be punishment for hacking into the electrical system. I must appease the god Toyotus. Perhaps feeding it ultra high octane gas, giving it a much needed oil change, and pampering it with a whole body wash might do it.
Hmmm my very first orb manifested in the car as well: http://www.skepdic.com/orbs.ht
It was and has been raining lightly all morning and I was at the entrance to the garage so there's a good chance this is a tiny droplet floating by.
In the series of pics I took there were no other orbs, so this speck is almost certainly not on the lens itself. Which makes sense since the orb is very bright--implying it was illuminated by the camera's flash.
For the curious, that's my test circuit for a variable intermittent wiper control I'm working on. There's a breadboarded microcontroller circuit in the transparent plastic box and two relays mounted on a DIN rail at the back of the box. Wires go up to the car's wiper circuit underneath the steering column. I've been studying the wiper's circuitry for several days now, but I snipped the wiper's wires just today to bypass the car's electronics and hook up my circuit to it. Love the coincidence.
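For readers curious what the firmware for something like this might look like, here's a minimal MicroPython-style sketch of the general idea (hypothetical pin assignments and timings; this isn't the actual code on my breadboard): a potentiometer sets the pause between single wiper sweeps, and the microcontroller pulses the relay.

```python
# Minimal sketch of a variable intermittent wiper controller (hypothetical pins/timings).
from machine import ADC, Pin
import time

relay = Pin(5, Pin.OUT)    # GPIO driving the wiper relay coil
knob = ADC(Pin(34))        # potentiometer sets the pause between sweeps (ESP32-style ADC)

while True:
    relay.on()             # energize the relay long enough for one sweep
    time.sleep(1.0)
    relay.off()
    # Map the knob reading (0-4095) to a pause of roughly 1-10 seconds
    pause = 1 + 9 * knob.read() / 4095
    time.sleep(pause)
```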
Finally, as alwayz the Bible has something to say: Beholdest thou the mote that is in thy camera's eye (Matt.7:3)
Friday, May 06, 2011
Pulseebo®
My company is all set to manufacture Pulseebo®, a cardiovascular drug indicated for hypertension, atherosclerosis, hyperlipidemia, among others. It'll be the most inexpensive CV drug around at just $0.10 per tablet. It's Rx so you need to get your doctor to write out a prescription.
Pulseebo® has no known side effects due to the fact that we have completely removed all active ingredients. Remember that active ingredients are responsible for all side effects in drugs. By taking them out of the formulation we have--for the first time in the industry--made a drug that will not produce any side effects. Moreover, this means that overdosing on Pulseebo® is almost impossible. Popping a whole bottle will, at worst, give you an upset stomach. (We're already working on our next drug Pulseebo Minus® which will have all nonactive ingredients removed as well to finally make overdosing an impossibility).
The evidence for the efficacy of Pulseebo® is just overwhelming. Over the past 5 years we've done extensive non-randomized, uncontrolled, unblinded, multicenter studies which we've published in our in-house journal clearly showing that Pulseebo® is safe and effective. (These studies and other technical literature are available upon request.) More importantly, Pulseebo's efficacy is attested to by those who've used it. Just ask our staff for their personal stories. They've been unbiased and thoroughly objective in their assessment and have absolutely no conflicts of interest.
For more on Pulseebo®, ask your doctor on your next visit.
Pulseebo® -Your heart deserves more
Thursday, January 27, 2011
When logic goes to the dogs
The dog's argument has an implicit inference which has to be made explicit:
All cats have four legs.
All creatures that have four legs are cats.
I have four legs.
Therefore I am a cat.
That argument can also be stated, equivalently, as follows:
If the animal is a cat then it has four legs.
I'm an animal with four legs.
Therefore, I'm a cat.
Well, our canine shouldn't be heartbroken at all for discovering she's actually a feline. Instead she should be depressed that she's committed the fallacy of illicit conversion.
The fallacy occurs when, given "all p are q," we infer that the converse "all q are p" is also true. Likewise, when given "if p then q" we conclude that "if q then p," we've committed an illicit conversion. The "if - then" case may be more familiar when stated as follows:
If p then q
q
Therefore p.
That's the well-known fallacy of affirming the consequent, an example of an illicit conversion. (p is known as the antecedent and q the consequent.) An argument in this form is invalid.
The thing to remember is that "all s are r" or "if p then q" do not necessarily imply "all r are s" or "if q then p." The converse is not necessarily implied, but it might be true. For instance, in causal arguments, if p causes q, and if p is both necessary and sufficient to cause q, then the converse is also true:
If a sheet of paper is heated to its combustion temperature and oxygen is present then it will burn.
If paper is burning then it's been raised to its combustion temperature and oxygen is present.
Both statements are true. This is called a biconditional--the conditional (i.e., the if - then statement) is true both ways: if p then q, and if q then p. To formalize the relationship between temperature/oxygen and paper we would say:
Paper will burn if and only if its temperature is raised to its combustion point and oxygen is present.
The phrase "if and only if" indicates this is a biconditional statement.
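If you'd like to verify the invalidity mechanically, here's a quick brute-force check (an illustrative sketch I'm adding, not something from the references): enumerate all truth assignments for p and q and look for a case where the premises are true but the conclusion is false.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

# Affirming the consequent: premises are "if p then q" and "q"; conclusion is "p".
invalid_counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p          # premises true, conclusion false
]
print(invalid_counterexamples)                # [(False, True)] -> the form is invalid

# With the biconditional "p if and only if q" as a premise, the same inference is valid.
biconditional_counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if (p == q) and q and not p               # no assignment satisfies this
]
print(biconditional_counterexamples)          # [] -> no counterexample, the form is valid
```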
----
Print References:
Bennett, Deborah J. Logic Made Easy: How to Know When Language Deceives You. New York: W.W. Norton & Co., 2004. pp. 108-110, 116-117, 130.
Vaughn, Lewis. The Power of Critical Thinking: Effective Reasoning About Ordinary and Extraordinary Claims. New York: Oxford University Press, 2005. pp. 287-289.
Tuesday, January 25, 2011
The prejudice of faith
Faith discriminates. A person will choose to believe in certain ideas that are extraordinary and unsupported by evidence and reason, but won't have faith in a host of other, similarly unevidenced extraordinary ideas. So what then are the criteria for having faith in X but not Y? Why X instead of Y, and why not neither? What kinds of claims must we have faith in? What are the characteristics of those claims for which we must forgo reason/evidence, but must believe in fully nonetheless rather than just ignore or be agnostic about?
I think if believers honestly and sincerely answered these questions they'd find that their bases would include, among others, preference, emotion, and culture-centric biases/prejudices. If we scratch the surface I think we'll discover instances of special pleading--of singling out particular claims which they hold immune to rationality and to the rules of evidence and logic, affording them a special pass and privilege they don't grant to almost all other ideas/beliefs and areas of inquiry.
Friday, January 21, 2011
More on HPT, and conditionals and their converse
Update (March 2016): A more comprehensive contingency table which includes detailed definitions can be viewed here.
-------------
After writing the primer on conditional probabilities and their converse vis-a-vis screening tests, I continued to scour the Web to gather more information on the base rates of conception/pregnancy. But, much to my chagrin, as with searching for scientific studies on the reliability of home pregnancy tests (HPT) [note 1] I have come up nearly empty-handed. Thus far I've managed to find only one resource (which unfortunately doesn't provide any citation or rationale for the 50% figure provided; it sounds quite plausible, notwithstanding).
The probability of being pregnant increases as various indicators come into play. For instance, given coitus within the last menstrual cycle and given that the current scheduled menses has been overdue for several days, the probability that one is pregnant increases to around 50%. For our hypothetical HPT in the hands of the "average" user (for whom sensitivity = specificity = 75%), given P(pregnant) = 0.50, the reliability of the HPT is: P(pregnant | test positive) = 75% and P(not pregnant | test negative) = 75%. In this case, the true positive and true negative ratings are equal to the test's sensitivity and specificity only because the base rate = 50% and sensitivity = specificity.
A 75% true positive rating wouldn't warrant declaring HPTs as being spot on. If the prior probability (base rate) of 50% for pregnancy is held constant, the sensitivity and particularly specificity have to be >90% to attain a true positive rating close to 100%. (Use the spreadsheet and experiment with different combinations of base rate, sensitivity, and specificity. Instructions for copying the table to your spreadsheet are in Testing Screening Tests.)
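If you'd rather check this with a few lines of code instead of the spreadsheet, here's a small Python sketch (an illustrative addition using the same Bayes arithmetic as the contingency table) that holds the base rate at 50% and sweeps a few sensitivity/specificity combinations:

```python
def true_positive_rating(base_rate, sensitivity, specificity):
    """P(pregnant | test positive), via Bayes' theorem."""
    p_pos_and_pregnant = sensitivity * base_rate
    p_pos_and_not_pregnant = (1 - specificity) * (1 - base_rate)
    return p_pos_and_pregnant / (p_pos_and_pregnant + p_pos_and_not_pregnant)

base_rate = 0.50  # assumed prior probability of pregnancy
for sens, spec in [(0.75, 0.75), (0.90, 0.90), (0.95, 0.95), (0.90, 0.99)]:
    ppv = true_positive_rating(base_rate, sens, spec)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f} -> P(pregnant | positive) = {ppv:.1%}")
# sensitivity=0.75, specificity=0.75 -> P(pregnant | positive) = 75.0%
# sensitivity=0.90, specificity=0.99 -> P(pregnant | positive) = 98.9%
```

Note how at 75%/75% the true positive rating is exactly 75%, and it approaches 100% only as specificity gets very close to 1.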
It's claimed that HPTs are accurate, that "if the test result is positive, you're almost certainly pregnant" although "negative results are less reliable." However, as Bastian et al. discovered, even if the laboratory-determined sensitivity and specificity of the HPT kit is high, once the kit is in the hands of consumers, the actual sensitivity drops because most users fail to follow the instructions to the letter. User error detracts from the reliability of the test.
HPTs rely on the detection of the level of human chorionic gonadotrophin (HCG) in urine, HCG being a chemical whose amount increases over time after conception but is present in insufficient quantities during the first week or so after conception for the HPT to present a positive finding. Thus most HPT kits are to be used after the upcoming menstrual period is missed. If a user tests herself within a week of unprotected intercourse, chances are there still won't be enough HCG for the HPT to detect, assuming conception had taken place. Such misuse lowers the effective sensitivity of the test.
Moreover, as we've seen, what we're interested in discovering are the probabilities of the converse, i.e., not the sensitivity and specificity but their inverse--P(pregnant | test positive) and P(not pregnant | test negative). To obtain those figures we need to take the base rate of pregnancy into consideration. And so even with a sensitivity and specificity >90%, a true positive close to 100% is only possible if the prior probability of pregnancy is high enough (>50%). To fail to factor in P(pregnancy) is to commit the base rate fallacy.
According to studies even doctors can misunderstand the nature of sensitivity and specificity and fail to see the need to compute for the probability of the converse.
"[C]onditional statements are often confused with their converses. When they evaluate medical research, physicians routinely deal with statistics of the sensitivity and specificity of laboratory test results. In a 1978 study reported in the New England Journal of Medicine, it became apparent that physicians often misunderstand the results of these tests" [note 2].
In a study by David Eddy, after being given the values for base rate, sensitivity, and the complement of specificity (i.e., 1 - specificity), 95% of doctors surveyed overestimated P(malignancy | test positive) by one order of magnitude. "David Eddy reported that it's no wonder physicians confused these conditional probabilities; the authors of the medical research often made the error themselves in reporting their results" [note 3].
To end, let's look at conditional probabilities using "if / then" notation [note 4]. If a patient has cancer, then the probability of testing positive is 85%. That's sensitivity. If a patient does not have cancer, then the probability of testing negative is 90%. That's specificity. Now we want to know the converse: if an individual tests positive, then the probability of him/her having cancer is ___. And if a person tests negative, then the probability of him/her not having cancer is ___. They're both blank because we need the prevalence rate (base rate) of the particular type of cancer to determine those values. And the lower the prevalence of the disease, 1. the higher the false positive rating of the screening test and 2. the lower its false negative rating will be.
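As a worked illustration (with a prevalence I'm assuming purely for the example, not a figure from the discussion above), suppose the cancer's prevalence is 1%. Bayes' theorem then fills in the blanks:

```python
prevalence = 0.01      # assumed for illustration only
sensitivity = 0.85     # P(test positive | cancer)
specificity = 0.90     # P(test negative | no cancer)

# P(cancer | test positive)
p_cancer_given_pos = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))

# P(no cancer | test negative)
p_no_cancer_given_neg = (specificity * (1 - prevalence)) / (
    specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)

print(f"P(cancer | positive)    = {p_cancer_given_pos:.1%}")    # ~7.9%
print(f"P(no cancer | negative) = {p_no_cancer_given_neg:.1%}")  # ~99.8%
```

At that low an assumed prevalence, most positives are false alarms even though the sensitivity and specificity look respectable--exactly the base rate effect described above.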
In general, given the statement "if A then B," it doesn't follow that the converse, "if B then A," is necessarily true. Thus, given "if I drink and drive then I will figure in a mishap," it would be erroneous to conclude that "if I'm in an accident then I was driving drunk." I could be in a mishap even if I haven't had a drink, and even if I'm the passenger.
----
Notes:
1. To find all studies with "home pregnancy test" in their titles, I typed in "home pregnancy test[Title]" (without the quotes) in the search engine box on the PubMed site. Only 13 studies (mostly irrelevant) came up.
2. Bennett, Deborah J. Logic Made Easy: How to Know When Language Deceives You. New York: W.W. Norton & Co., 2004. p. 109. The NEJM article being referred to is "Interpretation by Physicians of Clinical Laboratory Results" by Ward Casscells, Arno Schoenberger, and Thomas B. Graboys.
3. Bennett, p.110.
4. Bennett, p.109.
Thursday, January 20, 2011
Testing screening tests
Update (March 2016): A more comprehensive contingency table which includes detailed definitions can be viewed here.
-------------
How reliable are home pregnancy test (HPT) kits? Given 100 women who've just conceived, 75 of them will test positive. And given 100 women who aren't pregnant, 75 will correctly test negative [see note 1]. Assume the chance of conception when coitus is performed on some randomly chosen day of the month is 5% [see note 2]. If the HPT result comes out positive, should the woman panic and faint if it's an unwanted pregnancy? Or if she's been trying to be with child for years, should she immediately broadcast the news on Facebook and Twitter, and email every friend and kin she has?
Before finding out the answer, let's introduce some important terms. The base rate is the prevalence rate or frequency of a disease, condition or phenomenon in the population. Sensitivity is the rate/frequency/probability a medical screening test will result in a positive finding given that the person being tested has the condition or disease which the test is for. In probability notation sensitivity can be denoted as P(test is positive | person has the condition). That's read as "the probability that the test comes out positive given the person has the condition." The vertical bar is read as "given." Specificity is the rate/frequency/probability a medical screening test will result in a negative finding given that the person being tested does not have the condition or disease which the test is for. In other words, specificity is P(test is negative | person doesn't have the condition).
So for HPT the base rate = 0.05, sensitivity = 75/100 = 0.75, and specificity = 75/100 = 0.75.
What we're interested in finding out is how reliable the test is, i.e., how accurate it is when the test comes out positive and when it comes out negative. In essence we want to know P(being pregnant | test is positive) and P(not being pregnant | test is negative). These are two values and they needn't be the same. If the values are close to 1 (e.g., > 0.90) then we can say that the HPT is reliable.
Notice that when we're talking of the reliability of the test when it results in a positive finding we are looking at P(being pregnant | test is positive). This is not the same as the sensitivity of the HPT, which is P(test is positive | person is pregnant). The test's sensitivity measures how accurate it is when the subjects being tested are already known to have the condition (e.g., are definitely known to be pregnant before the test is even administered). Quite obviously women who buy HPT kits do so to find out whether or not they're pregnant. Likewise, the reliability of the test when the results come back negative is P(not pregnant | test is negative). Contrast this with the specificity of the HPT, which is P(test is negative | not pregnant). So the reliability indicators we're interested in are the converse of sensitivity and specificity.
To help us derive all the numbers, we shall enlist the help of a spreadsheet. Take a look at this 2x2 contingency table (it would be best if you open a new tab or window on your browser for the spreadsheet so you can easily switch between this text and the tables). Although I can simply give you the equations for P(being pregnant | test is positive) and P(not pregnant | test is negative), using a table is far more illuminating and easier to understand. But if you would rather have the formulas, I've included them in the spreadsheet--look for the rows "true positive" and "true negative."
Although there are clearly more than two rows and two columns in the table it's described as 2x2 because the two variables have two states/levels each. In the case of HPT the variables are pregnancy and HPT test result. Their states are: pregnant, not pregnant, test result positive, test result negative.
The equations to compute for the values of the cells are provided. The values for cells which don't have explicit equations for them can easily be derived after the other cells have been filled. Keep in mind that except for the last table which uses user-provided sample size, the cells of the contingency tables all contain probabilities. Therefore, they will only have values from 0 to 1, inclusive.
Now scroll down to the row labeled "EXAMPLE." I've already plugged in the base rate, sensitivity and specificity for HPT kits (see yellow-colored cells). Focus your attention on the purple cells. As you can see when HPT says a woman is pregnant, there's only a 13.64% chance that it's true. Thus the false positive rate = 86.36%. Out of a hundred times the test comes out positive, 86 of them will be false alarms. Not very comforting at all.
But look at HPT's true negative rate--P(not pregnant | test negative). It's 98.28%. When the test tells a woman she hasn't conceived, then it will be wrong only 2 out of 100 times. Now that's reliable.
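If you'd like to double-check these purple-cell values outside the spreadsheet, here's a short Python sketch (my own illustrative addition, not part of the linked spreadsheet) that does the same 2x2 arithmetic:

```python
def screening_table(base_rate, sensitivity, specificity):
    """Return the converse conditional probabilities we actually care about,
    built from the joint probabilities of the 2x2 contingency table."""
    p_pos_and_cond = base_rate * sensitivity              # condition present, test positive
    p_neg_and_cond = base_rate * (1 - sensitivity)        # condition present, test negative
    p_pos_and_not = (1 - base_rate) * (1 - specificity)   # condition absent, test positive
    p_neg_and_not = (1 - base_rate) * specificity         # condition absent, test negative

    true_positive = p_pos_and_cond / (p_pos_and_cond + p_pos_and_not)  # P(condition | positive)
    true_negative = p_neg_and_not / (p_neg_and_not + p_neg_and_cond)   # P(no condition | negative)
    return true_positive, true_negative

tp, tn = screening_table(base_rate=0.05, sensitivity=0.75, specificity=0.75)
print(f"P(pregnant | positive)     = {tp:.2%}")   # 13.64%, so false positive rate = 86.36%
print(f"P(not pregnant | negative) = {tn:.2%}")   # 98.28%
```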
Before moving on, I suggest creating a new google spreadsheet so you can play around with the inputs and see how the probabilities change. Go to https://docs.google.com/ (and sign in if you need to). Click on the "Create new" button on the upper left hand corner of the window. A drop menu will appear. Click on "Spreadsheet." Go to the contingency table and click on "View" on the menu bar on top. Click "Show all formulas" on the drop menu. Press CTRL-A to select the entire table and then CTRL-C to copy it. Go back to your new blank spreadsheet and press CTRL-V to paste. You will have to manually join the various cells for those whose text is too long to fit one cell (and so it wraps down). Do this by selecting the cells on the row which you want to join and then click on the merge icon (it's the square one with left and right facing arrows). To "unmerge" select the merged cells and click the icon.
Now you can change the values of the yellow colored cells--base rate, sensitivity, specificity, sample size--and watch how the values in the tables below them change. Try inputting a value of 1 for specificity and watch how false positives are completely eliminated no matter what the sensitivity is.
One of the reasons that HPTs (or, for that matter, other tests) are prone to false positives (i.e., the test result is positive but it's wrong--it should actually be negative) is that the base rate is rather low. And as we decrease the base rate--keeping sensitivity and specificity constant--the false positive rate goes up. Try tweaking the base rate on your spreadsheet and see what happens.
If a low base rate decreases the true positive rate, then a higher base rate should increase it, right? Remember we assumed that the base rate, thus P(getting pregnant), is 0.05? Well, our assumption was that the woman doesn't know her ovulation cycle--she doesn't know the time of the month she's fertile. Now suppose she does. Suppose she knows exactly the six-day ovulation period when she can conceive [see note 2]. If she then has coitus during this period, the chance of her conceiving jumps to approximately 20% (it's actually between 10% and 33% depending on the day). Given that the base rate has increased, the reliability of the HPT should increase as well. If we change the base rate to 0.20, the false positive rate drops to 57.14%. If the test comes out positive, it's nearly a coin toss whether one is really pregnant. On the other hand, look at how the false negative rate has been affected. It's increased from 1.72% to 7.69%. Moral is: you can't have your cake and eat it too.
Let's move on and use another real life example. Fecal occult blood test (FOBT) is a kit which can be used at home to screen for, among other conditions, possible colorectal cancer (CRC). As with HPT, FOBT kits from different manufacturers vary in their sensitivity and specificity. Let's take an "average" FOBT which has a sensitivity = 65% and specificity = 95%. How often (or rarely) does CRC occur in the population? Given the prevalence in 2007 we can estimate it to be 0.37% [see note 3].
Plugging those values in our spreadsheet we get the following:
True positive rate = 4.61%
False positive rate = 95.39%
True negative rate = 99.86%
False negative rate = 0.14%
When FOBT comes out negative we can almost be proof positive that we are CRC-free. But when it comes out positive, further testing is necessary to confirm/refute the initial screening. The low base rate means this test is prone to false positives.
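For those following along in code rather than the spreadsheet, the same Bayes arithmetic reproduces these FOBT figures (again, just an illustrative check):

```python
prevalence, sensitivity, specificity = 0.0037, 0.65, 0.95

true_positive = (prevalence * sensitivity) / (
    prevalence * sensitivity + (1 - prevalence) * (1 - specificity))
true_negative = ((1 - prevalence) * specificity) / (
    (1 - prevalence) * specificity + prevalence * (1 - sensitivity))

print(f"true positive = {true_positive:.2%}")   # ~4.61%
print(f"true negative = {true_negative:.2%}")   # ~99.86%
```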
----
Notes:
1. Home pregnancy test sensitivity and specificity values: "The results we have suggest that for every four women who use such a test and are pregnant, one will get a negative test result. It also suggests that for every four women who are not pregnant, one will have a positive test result." In other words, P(test negative | pregnant) and P(test positive | not pregnant) are both 1/4. P(test positive | pregnant), i.e., sensitivity, and P(test negative | not pregnant), i.e., specificity of the test, are the complements of these values: 1 - 1/4 = 0.75.
It is important to note that different brands of HPT kits have varying sensitivities and specificities.
2. Pregnancy base rate: During the 6-day ovulation period the probability of becoming pregnant is between 10% and 33%. Taking a simplistic average, P(pregnancy | coitus during the 6-day ovulation period) is roughly 20-25%; the calculation below uses 0.25. According to the study there is no conception if coitus is outside this 6-day period. Therefore P(pregnancy | coitus outside the 6-day ovulation period) = 0. Since the ovulation period is 6 days and there are 30 days/month, the probability that any randomly picked day is within the 6-day ovulation period = 6/30 = 1/5 = 0.2. We need to find the probability of getting pregnant regardless of when coitus is performed, that is, we need to determine P(pregnancy).
Let:
p = pregnancy
p' = no pregnancy
v = coitus during 6-day ovulation period
v' = coitus outside the 6-day ovulation period
P(v) = 0.2
P(v') = 1 - P(v) = 0.8
P(p|v) = 0.25
P(p|v') = 0
P(p & v) = P(v) * P(p|v) = 0.2 * 0.25 = 0.05
P(p & v') = P(v') * P(p|v') = 0.8 * 0 = 0
P(p) = P(p & v) + P(p & v') = 0.05 + 0 = 0.05
I actually used a tree diagram (with ovulation period as the first two branches) and a 2x2 table as in the spreadsheet to aid in determining the probabilities. The above equations are a distillation of that graphical and tabular process.
3. Prevalence of colorectal cancer: "On January 1, 2007, in the United States there were approximately 1,112,493 men and women alive who had a history of cancer of the colon and rectum -- 540,636 men and 571,857 women. This includes any person alive on January 1, 2007 who had been diagnosed with cancer of the colon and rectum at any point prior to January 1, 2007 and includes persons with active disease and those who are cured of their disease." In 2007 the population of the US was 302.2 million. Therefore we can compute an estimate of the base rate for CRC: 1.112 million / 302.2 million ≈ 0.0037.
Tuesday, January 11, 2011
A very sick doctor
According to Eduardo Cabantog he's an MD who's served in various state-run hospitals in the Philippines. Some years ago he founded Alliance in Motion (AIM) Global, Inc.--a multi level marketing (MLM) company. He's never looked back since. And if his presentation (see endnote 1) is any gauge, he's actually turned his back on the ethics of being a doctor.
AIM is into food/dietary supplements. But I'm not even going into the efficacy question regarding the items the company is peddling. This blog entry is about a (different) scam Cabantog has been perpetrating during his presentation(s).
Watch the following video beginning at around 2:40 (you can see how the older gentlemen eventually fared in Part 8 of the presentation below).
The two men are easily thrown off balance by Cabantog when he performs his drag-the-person-down test. But after taking the Alive! supplement (manufactured by Nature's Way) the two subjects become incredibly immovable. Try as he might, Cabantog could not get them to budge. Amazing, isn't it? Well, it would be if Cabantog weren't resorting to outright flimflam. Here's Richard Saunders of Australian Skeptics showing us how to do this party trick (see endnote 2). The reveal begins @5:10.
The direction in which the forces are applied is key. Push/pull a person away from where he's standing and it's easy to throw him off balance. The opposite is also true. Moreover, the lack of any form of blinding ensures that the tester can apply as much or as little force as he wants depending on whether the subject has taken the supplement or not. Similarly, the subject, knowing that he's taken the supplement and having already been primed by Cabantog's prior presentation on the benefits of the wonder supplement, can apply more or less resistance. The subject--just like a patient--reflexively wants to please the authority figure and to play along, especially given the large audience present (yes, there is peer pressure at work). Given how Cabantog has full control of the vectors--the magnitude and direction of the forces he's going to apply--and given how the subjects can more or less be counted on to perform as expected of a good albeit unwitting shill, there is almost no way this trick can go wrong.
Let's move on. Watch the following, paying close attention to the portion of the video from 2:16 - 3:03 and 4:34 - 5:20. Watch both a few times and see if you notice something peculiar.
Did you see it? Those two segments are merely mirror images of one another. And we know the first is the original because @2:23 to 2:25 we see the label of the lancet (the penlike device for pricking the finger)--the first few letters being "Lan"--flipped horizontally @4:41 to 4:43.
Moreover, if you listen carefully to the audio (best if you don't look at the vid) you'll hear the same one in both the first and second segments (although the first few seconds from the first don't make it into the second; in their place is a short part of the audio from further downstream, resulting in that portion being played twice in the second segment). You'll even hear the very same car horn @2:47 and @5:05.
In the first segment, the supposed blood sample taken is one before the subject took Alive! In the second, a blood sample after the supplement was taken. But as we've seen AIM Global merely provided a flipped version of the very same blood sampling. If AIM actually took blood samples from a subject in the audience before and after taking Alive! and if the glass slide showing freely moving RBC is from the same subject, then because they don't have the video clip for the second sampling there is reason to believe AIM is hiding something. Perhaps they don't want to show that they added a substance to the second blood sample (which they didn't to the first) because "Rouleaux and clumping occur when blood is placed under a microscope without first being suspended in proper solutions to control acidity and agglutination." Is AIM going to give us the lame excuse that they forgot to turn on the camera during the second blood sampling or that that video segment got corrupted?
Finally, I found the following images in a Facebook album that apparently is open for all Facebook members to see and comment on. They're ads for AIM's latest product C24/7, which apparently is an improved Alive! since instead of just 16,000 "phytonutrients" C24/7 contains 22,000. Given the weight of one capsule and this number of chemicals you can do the math to find out how many micrograms of each there are per cap (assuming AIM isn't pulling our legs--again--with that 22k figure).
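To put that in perspective (using a capsule weight I'm assuming purely for illustration, not a figure from AIM), here's the back-of-the-envelope arithmetic:

```python
capsule_mass_mg = 500          # assumed typical capsule fill weight, for illustration only
claimed_ingredients = 22_000   # AIM's claimed number of "phytonutrients" in C24/7

micrograms_each = capsule_mass_mg * 1000 / claimed_ingredients
print(f"on average ~{micrograms_each:.0f} micrograms per ingredient")  # ~23 µg
```

Roughly 20-odd micrograms apiece, at best--which is presumably the point of the "do the math" exercise.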
Whether these ads were made by AIM Global or by individuals (re)selling the supplement, I don't know. Since these ads make explicit therapeutic claims, they are in clear violation of the laws of the land (among others, RA 7394 Art. 112 and Department of Health regulations).
----
Notes:
1. Alliance in Motion (AIM) Global presentation featuring Eduardo Cabantog, MD
Part 1/10 Part 2/10 Part 3/10 Part 4/10 Part 5/10 Part 6/10 Part 7/10 Part 8/10 Part 9/10 Part 10/10
2. This trick was also employed by, among others, Power Balance (Australia), which recently had to admit (thanks to the Australian Competition and Consumer Commission) that its wild claims for its rubber bracelet are bogus:
Saturday, January 01, 2011
If you don't have a Nobel, I'll have none of your ideas
A recent example of an ad hominem argument.
After being provided Neil deGrasse Tyson's "Perimeter of Ignorance" paper (see footnote), which puts Newton, Einstein, et al.'s religious sentiments into perspective, instead of critiquing the paper a believer went off questioning NDT's "smartness": "Did he ever win a Nobel Prize? Please elaborate why [we] would think that he is smarter or wiser than Albert Einstein or Sir Isaac Newton." And even after being told that "an idea stands and falls on its merit, not from whom it came from," this believer forges on, none the wiser, and hauls out NDT's curriculum vitae and resume from Wikipedia, placing emphasis on NDT's being "a frequent Guest on the Game Show called Jeopardy," finally concluding, "I am sorry but his greatest 'Achievements' are certainly nothing compared to the 'Achievements' of Albert Einstein, and Sir Isaac Newton. Are you kidding Sir???"
Whether this person even bothered to read NDT's essay is unclear. He certainly does not even broach the ideas contained therein.
What is an ad hominem? It consists of "attacking the person instead of attacking his argument" [link]. "This tactic is logically fallacious because insults and even true negative facts about the opponent's personal character have nothing to do with the logical merits of the opponent's arguments or assertions" [link].
An ad hominem argument has the following basic form:
1. Person A makes claim X.
2. Person B makes an attack on person A.
3. Therefore A's claim is false.
[link]
Person A corresponds to NDT and person B to the believer.
One crucial question is whether B would've said what he did if NDT's essay had been, say, entitled "The Perimeter of Enlightenment" and had hailed Newton et al. as being warranted in invoking an intelligent designer for those mysteries which at the time eluded scientific explanation. Well apparently not, since when confronted with this very question, B simply refused to answer.
When you think about it, science would never advance and would become dogmatic and a cult worshiping the pronouncements of a few if the ad hominem rule were to prevail. Young scientists and thinkers would never get off the ground since they have yet to make their mark in history by adding to the pool of knowledge or even overthrowing wrong ideas by their esteemed predecessors. So when their ideas are at odds with the status quo they would be compared with the giants who've come before them and since their CV and resume are practically blank they would be pooh-poohed and never given an audience. Their ideas would be trampled upon and deemed heretical simply because they have no Nobels or awards to show off, no celebrity status, no biographies, no books written about them, none of the "blings" to show they're smarter and oh so important.
And rather obviously, scientists win Nobel Prizes after their ideas have proved to be groundbreaking and shown to be true, not because they scored 300 on an IQ test, or because their CV is a hundred pages thick, or because they didn't frequent game shows. Had we applied the above believer's criteria for judging ideas, we would've said of Einstein when he came up with his relativity theory (thus supplanting Newtonian mechanics) at the beginning of the 20th century: "Einstein was a patent office clerk! Are you kidding me?! Has he won a Nobel Prize? Please elaborate why we should think he's smarter or wiser than Sir Isaac Newton?"
-----
Far more entertaining than the paper is Tyson's Beyond Belief 2006 talk with the same title: Part 1 Part 2 Part 3 Part 4