No doubt we're elated to hear she's feeling much better. We now feel lighter, having carried the burden of being completely impotent to help her with her condition. But as the caffeine makes its way to every neuron in our brain, we pause and ask ourselves: Is that tea actually an effective analgesic? Does it indeed have therapeutic value? Almost instantly we can hear the promoters retort: Doesn't the fact that your mom is now experiencing noticeably less pain and more mobility in her fingers support the claim that this product of ours does indeed provide relief to arthritic patients?!
Well, bluntly: No! And such marketing people can't imagine how weak (and flawed) such reasoning is. If someone should come up to you and offer anecdotal evidence of this sort, do rain (nay, pour!) on their parade and ask them to consider the following.
1. Expectancy and the placebo effect. It is a well-known fact that the placebo effect is real. In some cases it has been measured at around 35%, i.e., some 35% of those who took a placebo got better compared to a group who were not given anything for their condition (Lilienfeld et al. 156, Schick & Vaughn 200). Mere expectancy that one is receiving treatment can not infrequently suffice to effect changes in the patient. Interestingly, it has been suggested that the placebo effect for the antidepressant Prozac may be greater than the drug's pharmacological effect. So, if depressives (as yours truly is) could be fooled into believing they're taking Prozac when in fact they've been given sugar or starch pills, it wouldn't be surprising to find a statistically significant number of their depressions subsiding more quickly compared to a group who hadn't been provided any treatment. (Be that as it may, there is the standing question of how ethical placebo therapy is.)
In our mom's case we cannot rule out the placebo effect. It isn't unreasonable to suppose that, given the extent of the discomfort she was in, she had high expectations: she believed and hoped the tea would help her. On the other hand, the mere presence of the conditions for the placebo effect does not warrant the conclusion that it was indeed operative in this case.
2. The logical fallacies of affirming the consequent and post hoc ergo propter hoc. Consider the syllogism:

If p then q.
q.
Therefore, p.

Here's an example:

If the Daily Tribune publishes seditious articles, then the police will take over it.
The police took over the Tribune.
Therefore, the paper had published seditious articles.

Logical? In Yahweh's never-never land perhaps, but not in this universe. The conclusion simply does not follow from the two premisses. In fact, any argument of this form, known as affirming the consequent, is invalid. The consequent here refers to q, while p is known as the antecedent. It's pretty easy to see why such an argument is fallacious. The fact that the police have taken over the Tribune does not necessarily imply that it was taken over because the paper had published articles that were deemed seditious or what have you. It may have been taken over for other reasons.

Briefly, and to put the above in context, there are a total of four forms of argument in this family. Two are fallacious and two are valid. The other invalid form is known as denying the antecedent. The valid ones are denying the consequent (classically known as modus tollens) and affirming the antecedent (aka modus ponens). Here's an example of modus ponens, a logical argument whose conclusion necessarily follows from the premisses:

If the Tribune publishes seditious articles, then the police will take over it.
For a week now the Tribune has been coming out with seditious stories.
Therefore, the police will take over it.

With that long-winded intro, we can see that our mom's implicit argument is in the form of affirming the consequent:*

If the herbal tea is effective, then I should get better.
I got better.
Therefore, the tea is effective.

The premisses are true: if the tea works then indeed she should get better (that's the definition of "effective"). Moreover, she did get better. Nevertheless, that doesn't warrant the conclusion that the tea worked. Disappointed as she may be to learn that her reasoning is muddled, truth be told, just because she got better doesn't necessarily imply the tea was the cause. It could've been something else.
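For the programmatically inclined, the difference between these argument forms can be checked mechanically: a form is valid just in case no assignment of truth values makes every premiss true while the conclusion is false. Here is a minimal Python sketch (the function names are mine, purely for illustration):

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

def is_valid(premises, conclusion):
    """A form is valid iff no truth-value assignment makes all
    premises true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample
    return True

# Affirming the consequent: if p then q; q; therefore p.
affirming_consequent = is_valid(
    premises=[implies, lambda p, q: q],
    conclusion=lambda p, q: p,
)

# Modus ponens: if p then q; p; therefore q.
modus_ponens = is_valid(
    premises=[implies, lambda p, q: p],
    conclusion=lambda p, q: q,
)

print(affirming_consequent)  # False: p=False, q=True is a counterexample
print(modus_ponens)          # True: no counterexample exists
```

The counterexample the loop finds for affirming the consequent is exactly the Tribune scenario: the paper did not publish seditious articles (p false), yet the police took it over anyway (q true).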
This leads us to the related fallacy of post hoc ergo propter hoc--"after this, therefore, because of this." We commit this fallacy whenever we conclude that A is the cause of B simply because we observe B happening after A. As we can hopefully all appreciate, it is ridiculous for anyone to assert that because a bunch of protestors wrapped black bands around their arms and a temblor rocked the city shortly thereafter, therefore the former caused the latter. And yet day in and day out people commit this fallacy when they endorse and recommend various products and concoctions whenever they feel better after taking some medication, folk remedy, treatment, diet, etc. As with mom, their tacit reasoning is based on the post hoc fallacy.
It is true that A must occur before B if it is to be a cause of B (it can't be a cause if it comes after B), but that alone is not enough for us to conclude that it is the cause of event B. In more rigorous philosophical gobbledygook, the temporal precedence of A is necessary for it to be a cause of B, but temporal precedence alone is insufficient. Obviously, innumerable things precede any event. There is no dearth of things that precede an earthquake, yet just about none of them is the cause of the earthquake. Only a relatively small subset of preceding events in fact had anything to do with the phenomenon.
3. Confirmation bias. We all have an inherent tendency to seek out only evidence (however good or pathetic) that supports our belief (Schick & Vaughn 137). This is confirmation bias, a propensity that has to be corrected with something generally counterintuitive--looking for confuting/disconfirming evidence. If we're interested in finding out whether the tea is in fact effective against arthritis, we not only should look for supporting evidence, but we must, if we are to objectively assess its efficacy, check whether there is evidence that refutes it. And that leads us to the next item below.
4. Lack of controlled, randomized, blinded studies, and inadequate sample sizes. If, as we said, we are keen on getting to the bottom of things and wish to objectively evaluate the therapeutic effect of the herbal tea in question, we have to look at all the evidence, not only the evidence that supports the claim or our belief about it. In this regard the crucial questions are: Given a pool of people of approximately mom's age who've had the kind of arthritis she has for a similar length of time, how many among those who took this tea got better, and how many did not? And how many among those who did not take this tea got better, and how many did not? Without going into various details at this time, given the answers to these questions we can tabulate them in a 2 x 2 table and compute the statistical correlation between tea intake and relief from arthritic pain. (I shall discuss 2 x 2 tables and correlation in a future article.)
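To make the 2 x 2 tabulation concrete, here is a small Python sketch. The counts are entirely made up for illustration, not data from any actual study; the phi coefficient used below is a standard correlation measure for two binary variables:

```python
import math

# Hypothetical 2 x 2 counts (invented purely for illustration):
#                    improved   did not improve
# drank the tea         a=30              b=20
# did not drink it      c=25              d=25
a, b, c, d = 30, 20, 25, 25

# Phi coefficient: the Pearson correlation specialized to two binary variables.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(round(phi, 3))  # ~0.1: a weak positive correlation for these made-up numbers
```

Even when such a table yields a nonzero phi, that by itself says nothing about whether chance alone could explain the difference, let alone whether the tea caused the improvement.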
Note the following. We specified that those included in the survey should be as similar to our mom as possible: within a narrow age range, with the same type of arthritis, and having had the condition for a similar length of time. These are relevant variables (and there are probably others) which we want to keep constant throughout the pool of individuals included in our study. That is important because we don't know whether differences in these factors have a bearing on the effect we're investigating, and if they do, we don't want them to complicate and confound our study. We also said that we want to take into consideration those who did not take the tea. These are people who were not exposed to the suspected causative agent, the herbal tea. We need this group in order to compare the proportion of those who got better after taking the tea with the proportion of those who got better without taking it. Depending on the difference between these two proportions we can then assess whether relief is in fact correlated with drinking the tea.

Note that what we can determine here is mere correlation, not causation. These two concepts are related but not synonymous. The presence of (a high) correlation does not necessarily imply causation, but the lack of any correlation does mean lack of causation. If A is the cause of B then a correlation must exist between A and B. But if we detect a correlation between X and Y, it does not necessarily mean there is a causal relationship between X and Y.
In gathering existing data on those who have and have not taken the tea, we have undertaken a correlational study. We have not in fact performed a causal experiment. Such a study cannot provide causal evidence.
Among the problems with the above survey is something that isn't a trivial matter, one that we've already touched on above: the placebo effect. Our correlational study does not take it into consideration. So even if our study shows a statistically significant correlation between taking the tea and getting better, we wouldn't know whether it was the tea or the placebo effect (or something else) that was the cause. Thus, in a causal study (see below) it behooves us to go one step further and compare the proportion of those who got better after taking the herbal tea with the proportion who got better after, say, being given ordinary tea while being told they were taking the herbal tea. Only if the difference between these proportions is statistically significant can we conclude that there is evidence that the herbal tea is better than a placebo (assuming that ordinary tea has no therapeutic effect of its own, which would otherwise confound the results).
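As a rough illustration of how such proportions get compared, here is a Python sketch of a standard two-proportion z test. The counts are purely hypothetical, chosen only to show the mechanics:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two proportions, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 60 of 100 improved on the herbal tea,
# 45 of 100 improved on ordinary tea presented as the herbal tea (placebo).
z = two_proportion_z(60, 100, 45, 100)

print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level (two-sided)
```

With these invented numbers the difference would just clear the conventional 5% threshold; shrink the gap or the sample, and it would not.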
Among causal studies are retrospective and prospective studies. However, the strongest causal study we can conduct is a randomized, double-blind test. Without going into the intricacies, to go beyond correlation and derive good evidence for a causal hypothesis we want, ideally, to start with a large sample of, say, about a thousand participants who are the same in all relevant respects. The size of our sample is important because the larger it is, the smaller our margin of error.
We then randomly assign people from this pool either to the experimental group or to the control group. The rationale is that whatever remaining differences there are among the participants will be spread evenly between the two groups. Again, the target we're trying to reach is for both groups to be as similar as possible except for one variable.
We then expose those in the experimental group to the causative agent and those in the control group to a placebo. Moreover, we blind our study by withholding information from our subjects as to whether they are in the experimental or control group. Hence, they should remain unaware of whether they are taking the drug/substance or a placebo. To double-blind the study we withhold the same information from the researchers conducting the experiment including those gathering/recording the data. Double-blinding prevents researcher bias from affecting how the experiment is conducted and the collected data as well.
If analysis of the results shows that there is a statistically significant difference in the effect being observed between the two groups then we have supporting evidence that the agent under study is a causal factor. Otherwise, we conclude that our experiment failed to show any evidence for the substance's efficacy over and above the placebo effect.
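The logic of such an experiment can be simulated. The Python sketch below uses assumed numbers only: a 35% placebo response (the figure mentioned earlier) and a tea that contributes nothing on top of it. It randomly assigns 1,000 hypothetical participants and tallies improvement in each group:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

PLACEBO_RATE = 0.35    # assumed rate of improvement on placebo alone
TREATMENT_BOOST = 0.0  # assumed extra effect of the tea; 0.0 models an inert remedy

participants = list(range(1000))
random.shuffle(participants)  # random assignment spreads residual differences evenly
treatment, control = participants[:500], participants[500:]

# Each participant improves with the probability appropriate to their group.
improved_t = sum(random.random() < PLACEBO_RATE + TREATMENT_BOOST for _ in treatment)
improved_c = sum(random.random() < PLACEBO_RATE for _ in control)

# Both proportions hover around 0.35 when the tea is inert.
print(improved_t / 500, improved_c / 500)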
5. Regression to the mean. One characteristic of diseases is the variability of their symptoms. Arthritic pain, for instance, doesn't remain constant throughout the years the patient is afflicted with this chronic ailment. The pain waxes and wanes. It is not just coincidental that we go see our doctor when the symptoms are rather pronounced and intrusive. But because of variability we ought to expect to feel better after feeling very poorly. Even among terminal cancer patients whose condition deteriorates over time, we see variability within the short term. One day the patient may be in so much pain and discomfort that he feels he's just about to kick the bucket, but twelve hours later he's resting well. If we could quantify and graph the patient's condition we would see a downward trend indicating that he is deteriorating over time. That's the big picture. But up close we'd see a ragged line zigzagging up and down, much like the graph of the hour-to-hour, daily performance of the stock market. If we could perform a regression analysis we would be able to compute the line (or curve) of best fit, and we'd see that it more or less tracks the general trend of the data points (downward sloping in this case). This line gives us the mean, or expected value, of the patient's condition at any given point in time. Given the downward slope we can extrapolate that he's generally going to get worse (obviously).
What does all this mean? Because of the variability of the symptoms we should expect their degree of expression to regress toward the mean. If the arthritic pain has been particularly bad for the past couple of days, we would expect it to subside in the next few days. And if the person has been pain-free, we can expect a turn for the worse in the days ahead. Analogously, if the stock market index has been going up and up in the last few days, expect a correction to be in the offing, the general trend notwithstanding.
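Regression to the mean is easy to demonstrate by simulation. In the Python sketch below, daily "pain scores" fluctuate randomly around a fixed baseline (all numbers are hypothetical). If we select only the unusually bad days, the very next day is, on average, back near the baseline, with no treatment whatsoever:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

BASELINE = 5.0  # hypothetical average daily pain score on a 0-10 scale

# Daily pain: a stable mean plus random day-to-day fluctuation.
pain = [BASELINE + random.gauss(0, 2) for _ in range(10_000)]

# Pick out the unusually bad days (score above 8) and look at the day after each.
bad_days = [i for i in range(len(pain) - 1) if pain[i] > 8]
next_day_avg = sum(pain[i + 1] for i in bad_days) / len(bad_days)

print(round(next_day_avg, 2))  # close to the 5.0 baseline, far below 8
```

Anyone who sips a remedy on those worst days will "observe" a dramatic improvement by the next day, courtesy of nothing but statistics.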
Because of all of the above (and perhaps other criticisms that can be leveled at it) the mere fact that mom got better after taking the tea is not at all sufficient evidence for us to conclude that the infusion is effective against arthritic pain. Sorry mom, but I put my money on the placebo effect and variability of the symptoms of arthritis, not on your newfound snake oil. (No mom, virgin coco oil hasn't passed muster yet either.)
It isn't easy to argue with those who've had first-hand experience and who swear by their pet cures. Persuading them to look at things scientifically and objectively may be no different from convincing supernaturalists, alien-mongers, and paranormalists to do likewise. Their beliefs may be just as refractory to rational arguments that emphasize the vital need for empirical evidence to back up their claims. As the late biologist and Nobel laureate Peter Medawar rightly observed, "If a person is a) poorly, b) receives treatment intended to make him better, and c) gets better, then no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health" (quoted in Stanovich 60, emphasis added).
Most people (including religionists) will appeal to testimonials as evidence for their beliefs/claims. However, not even a thousand duly notarized testimonials will ever be sound evidence for a treatment's efficacy.
If one person can commit the fallacy of false cause, so can a hundred. If one piece of evidence is invalid or unreliable, many more pieces of invalid or unreliable evidence don't make the case any stronger. This means that the many testimonials offered by practitioners or users to promote a favorite therapy generally don't prove much of anything--except perhaps that some people have strong beliefs about certain treatments. (Schick & Vaughn 203)

At best testimonials/anecdotes can be a cue and starting point for launching full-scale scientific studies to investigate the matter. Ultimately, only well-designed controlled tests (preferably double-blinded and, needless to say, randomized), corroborative results from replications of these tests, as well as converging evidence from other (types of) scientific studies, can offer us a reliable basis for discovering how efficacious a product/treatment really is.
Ignorance of the epistemology and methodology for understanding and investigating empirical matters is, I believe, among the top reasons why people recklessly believe in various claims and why they often fail to think clearly. Without a background in scientific reasoning and the methods of science (as well as statistics, in the case of therapeutic claims, among others), we cannot reliably and competently evaluate, critique, and probe claims and ideas. If we are talking of knowledge of the empirical world, then only science provides us reliable knowledge. Thus, unless we are versed in scientific thinking we shall never be as confident as we can be about our beliefs, about what we regard as knowledge.
On the other hand, if we have no good, sound evidence then we must be tentative and cautious in our belief. As Hume rightly advises, we ought to proportion our belief to the evidence. And if evidence is scant and poor we would do well not to invest in it.
Finally, we must always beware of committing fallacious causal attributions, for knowing and pinpointing the causes of things is more difficult than we believe.
* Some may say that mom's argument had in fact been a modus ponens:

If I get better, then it means the tea was effective.
I got better.
Therefore, the tea was effective.

However, the conditional (the "if...then..." premiss) is not just questionable; it's a fallacious premiss. It's a non causa pro causa, specifically an implicit post hoc ergo propter hoc argument. The premiss asserts that if we observe event B, then A was the cause. But this is something that remains to be proved. Given B we still need to rule out causes other than A that may in fact have caused B. Thus, even if the argument above is deductive (and not inductive) and logically valid, it is still not a sound argument, since one of its premisses is false. That an argument is valid does not imply that its conclusion is true. Only if the argument is sound--it is valid and all the premisses are known to be true--will the conclusion necessarily be true.
Stephen S. Carey. 1998. A Beginner's Guide to Scientific Method, 2nd ed. Belmont, CA: Wadsworth.
Scott O. Lilienfeld, Steven Jay Lynn, & Jeffrey M. Lohr, eds. 2003. Science and Pseudoscience in Clinical Psychology. New York: Guilford Press.
Theodore Schick, Jr. & Lewis Vaughn. 1999. How to Think About Weird Things: Critical Thinking for a New Age. Mountain View, CA: Mayfield.
Keith E. Stanovich. 2001. How to Think Straight about Psychology, 6th ed. Needham Heights, MA: Allyn and Bacon.