What are we to make of it when we read a story of an apparently out-of-the-blue discovery that, according to a new study, some exposure apparently causes (or prevents) some disease? This is the classic "there they go again" moment for most readers of the news. It is why so many people, quite reasonably, tune out the random epidemiologic claim of the day. For example, how shall we think about this story reporting that acetaminophen (Tylenol) causes cancer of the blood (in old people and at very high doses)?
There are three main categories of what a result like this could mean. The first is that the claim is as legitimate as it can be, given the limits of available knowledge. The second is that the data do not actually support the claim very well, but the researchers and their touts ignored obvious potential errors, cheated with the statistical methods, and otherwise tried to support some claim they wanted to make. This is what I most often write about. The third, a particular problem with these random epidemiologic results that seem to come out of nowhere, is that the researchers were fishing through a dataset that offers many potential associations and found one, reporting it as if it were exactly the single result they were looking for.
Today I am focusing on the last of those, and the question of how we might figure out if it is occurring. I chose a news report of a claim that I have almost no substantive expertise about, so I have to be guided by epistemic principles and not topical knowledge.
First, it is important to understand the implications of data fishing expeditions. They are actually quite similar to the problem of dealing with cancer clusters that I wrote about before: Given a lot of possible correlations and random sampling error (even ignoring all other types of error), many associations will show up in a dataset due to chance alone. Moreover, those statistics that are supposed to tell you whether something showed up from chance alone (e.g., it is "statistically significant") do not mean what you think they mean when someone was fishing through the data. Don't be offended – I am not picking on my readers. I can tell you that most people publishing in the health literature also do not understand that those statistics do not mean what they think they mean.
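To make that concrete, here is a minimal simulation (in Python; the sample sizes and variables are invented for illustration, not taken from any real study). It fishes through 100 exposure-disease pairs in data that contains no real effects at all, and chance alone still reliably turns up a handful of "statistically significant" findings:

```python
# Illustrative only: simulate a dataset where NO exposure truly affects any
# disease, then "fish" through every exposure-disease pair for associations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_exposures, n_diseases = 1000, 20, 5

exposures = rng.normal(size=(n_people, n_exposures))
diseases = rng.normal(size=(n_people, n_diseases))  # independent of exposures

significant = 0
for i in range(n_exposures):
    for j in range(n_diseases):
        r, p = stats.pearsonr(exposures[:, i], diseases[:, j])
        if p < 0.05:
            significant += 1

# With 100 tests and a 5% false-positive rate, we expect roughly 5 "findings"
# even though every association in this data is pure noise.
print(f"{significant} of {n_exposures * n_diseases} pairs significant at p<0.05")
```

The "p<0.05" threshold only means what people think it means when a single pre-specified association is tested; scan a hundred of them and the guarantee evaporates.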
However, there will be associations in the data that are not caused by chance. Most will be the ones we already know about (e.g., men will get most diseases more often than women of the same age). But – and here is the difficult part – some will be true causal relationships that we did not previously realize existed. That means that having found the association of acetaminophen and blood cancer, we should be more inclined to believe that acetaminophen causes blood cancer than we were before. But how much more?
The answer has a lot to do with how much we think the result was a fishing expedition: The more fishing there was, the less we should change our beliefs. And if there was little reason to believe it is true based on previously available information, then a small change in beliefs leaves us far from believing. For example, if the study were a directed research project to see if acetaminophen causes blood cancer, based on a good theory and some previous studies that suggested it was true, we do not have the fishing problem. We might have the problems that are in the second category in the above list, but not the blind fishing problem. "Directed study" does not describe the current study, however, which used a dataset that was collected to study vitamins and the like, and primarily collected data on acetaminophen as a control variable.
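To put rough numbers on "how much more," here is a back-of-the-envelope Bayesian update (all figures are hypothetical, chosen only to illustrate the point). The same reported association should move our beliefs much less when it came from fishing, because a positive result was fairly likely to turn up somewhere even if the causal claim is false:

```python
# Back-of-the-envelope Bayes: every number here is invented for illustration.
def posterior_prob(prior, likelihood_ratio):
    """Update a prior probability given the evidence's likelihood ratio
    (how much more likely this result is if the causal claim is true
    than if it is false)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.01  # hypothetical low prior that acetaminophen causes blood cancer

# A directed study of this single question carries more evidential weight...
print(posterior_prob(prior, likelihood_ratio=10))   # ~0.09
# ...than the same numerical result found by fishing through many pairs,
# where a "positive" was likely to show up somewhere by chance alone.
print(posterior_prob(prior, likelihood_ratio=1.5))  # ~0.015
```

Either way we end up more inclined to believe the claim than before, but starting from a low prior, the fished result leaves us nowhere near believing.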
Aha, so the data collection was not designed to answer this question, so we should doubt the claim, right? Well, no. If the new analysis was specific and directed, and driven by the desire to answer the specific question, then there is no problem (other than, perhaps, that the optimal data collection methods might not have been used for the variables of greatest interest, because they were not considered important originally). The intentions of the data collectors do not matter, but the intention and methods of those who found the association do.
So, we can figure that out, can't we? Well, no, not usually. The problem is that approximately 100% of the time, researchers who fish for and find an association mislead the reader and imply that they were always interested in only that association. Just consider the abstract of the present study. Someone reading that (and not figuring out that this must be a repurposing of the "Vitamins and Lifestyle study" data merely from the name of the database) would get the impression that everything that was done was focused on exploring this one relationship. Perhaps that is true (except for the original data collection) but we cannot tell. Even when an association was found through fishing, authors consistently backfill an introduction that suggests they were motivated by previous knowledge about the topic. Do not believe it when someone tells you "this is biologically plausible, and there was previous suggestive evidence, so we should believe it" – it is always possible to find some suggestive evidence and a story that makes an effect biologically plausible.
You basically never see anyone write, "we did not have any reason to expect to see this result when we started looking, and what we found should be interpreted accordingly." Does this mean the researchers are consistently liars? That depends on what you mean by "lie". If we interpret it to mean intentionally trying to get someone to believe something you know to be false, then probably not. It is my experience that almost no epidemiologic researchers realize that what they are doing is bad science.
Occasionally someone offers some concession on the point, typically referring to the study as being "hypothesis generating". This is a nonsensical term, since no study is needed to generate a hypothesis. Also, every study offers some information about the association being studied, and never merely generates a hypothesis. The question is how much. The honest course for a researcher would be not to hide behind weasel words, but to state clearly that the result should be considered to provide only minimal knowledge until the same model applied to a different dataset produces the same result. Of course, such honesty also demands not writing an abstract that implies that the study estimate is correct, let alone a press release.
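For what that replication discipline might look like in practice, here is a minimal sketch (again with made-up data; nothing here is from the actual study): fish for the best-looking exposure in one dataset, then test only that one pre-specified association in an independent dataset, where chance findings usually evaporate:

```python
# Minimal sketch of the replication discipline described above (illustrative;
# the datasets and variable names are hypothetical, not from any real study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def fish(exposures, outcome):
    """Fishing step: scan every exposure and return the index of the one
    with the smallest p-value. This is where chance findings get crowned."""
    pvals = [stats.pearsonr(exposures[:, i], outcome)[1]
             for i in range(exposures.shape[1])]
    return int(np.argmin(pvals))

# Dataset A: pure noise. Fishing will still produce a "winner".
expo_a = rng.normal(size=(500, 50))
out_a = rng.normal(size=500)
winner = fish(expo_a, out_a)

# Dataset B: independent data. Now test ONLY the pre-specified winner.
expo_b = rng.normal(size=(500, 50))
out_b = rng.normal(size=500)
r, p = stats.pearsonr(expo_b[:, winner], out_b)
print(f"exposure #{winner} on fresh data: p = {p:.2f}")  # usually unremarkable
```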
Returning to the study at hand, was this perhaps a case where the researchers were genuinely interested in the specific question, and looked around for a database they could use to analyze it? Perhaps they even had in mind the hypothesis that, as they found, three specific types of blood cancer would show a positive association with acetaminophen while the association with a fourth blood cancer would be slightly protective. But you would never know it from what gets reported in the news because it is almost always implied that the study was always focused on the particular result, whether true or not. Worse, usually you cannot even figure it out from reading the research report unless you are a subject matter expert.
There are some clues. If someone does a case-control study (collecting people with the specific disease and comparing their exposure to similar people who do not have the disease), then you know that they were always focused on a particular disease, and so at worst they were fishing through the exposures, not both exposures and diseases. Ironically, the researchers, in touting this study to the press, seemed to be bragging that their study design was better than a case-control study (because it was prospective), though it is clearly inferior in terms of an honesty check.
Aside: I think I came up with a new rule for deciding that someone either does not know what he is talking about or is trying to bullshit us: anyone who declares that a particular study design is better than an alternative without explaining in what way it is better, and ideally also pointing out any ways in which it would be worse.
So what are we to make of this study? Certainly no one should act on it because, when assessing whether to engage in a particular behavior like taking acetaminophen, we should be interested in the sum of its effects, not one rare one. As for whether to even believe it, probably the gut reaction of most newspaper readers is not a bad proxy for correcting for the misleading effects of fishing for associations: If this is the first time you have heard something, ignore it because it is probably as useless as the hundred other things "they" tried to tell us this month.