Unhealthful News 164 - Taking scientific advice requires some scientific skill

On a slightly tangential note, but one quite critical to health policy, the New York Times reported today that a quarter of US state legislators lack a four-year university degree.  The report does not tell us how many more majored in highly non-liberal-arts and non-science fields (accounting, pre-law, business management, fraternity parties, etc.) that provide no basis for assessing scientific claims.  But it is these legislators who make many of the scientific decisions, including health policy decisions, on behalf of the people.

Yes, of course, a degree does not always correspond to knowledge and ability.  Bill Gates did fine without a bachelor's degree, though some of the priorities of his foundation seem to suffer from problems similar to those found in state health policy.  And on the other side:
“I don’t think it’s imperative that you have a college degree to be effective,” said Mike Fletcher, a retired state trooper elected to the Arkansas Senate last year. “I think the most important thing is to have common sense.”
But there is a strong correlation between the ability to make particular scientific assessments and the ability to finish a degree in a non-rote field.  Common sense does not go very far when trying to figure out whether the benefits of making a meningitis vaccine mandatory outweigh the costs (the sketch below illustrates the kind of calculation involved).  It does not even seem to help retired cops make sensible decisions about drug policy.
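To make that concrete, here is a minimal sketch, in Python, of the kind of expected-value comparison such a decision requires.  Every number in it is hypothetical, chosen only for illustration, not drawn from evidence about any real vaccine:

    # A toy expected-value comparison for a hypothetical vaccine mandate.
    # Every number is made up for illustration; a real assessment would
    # need evidence-based estimates and honest uncertainty ranges.

    population = 1_000_000           # hypothetical school-age cohort
    baseline_risk = 3e-5             # hypothetical annual disease incidence
    vaccine_efficacy = 0.85          # hypothetical fraction of cases prevented
    serious_side_effect_risk = 1e-6  # hypothetical rate of serious adverse events
    case_fatality = 0.10             # hypothetical deaths per case

    cases_prevented = population * baseline_risk * vaccine_efficacy
    deaths_prevented = cases_prevented * case_fatality
    serious_harms_caused = population * serious_side_effect_risk

    print(f"Expected cases prevented:      {cases_prevented:.1f}")
    print(f"Expected deaths prevented:     {deaths_prevented:.2f}")
    print(f"Expected serious harms caused: {serious_harms_caused:.2f}")

The arithmetic itself is trivial; the hard part, which common sense alone cannot settle, is knowing whether each input is credible, how uncertain it is, and how to weigh deaths prevented against non-fatal harms caused.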

The obvious response is that they have people who figure out such things for them.  The problem is that the further away someone is from understanding a scientific matter themselves, the more likely they are to believe someone who is not giving them accurate information, whether out of ignorance or in service of a hidden agenda.

You have to know something to even know who you should believe.

A policy maker who has absolutely no clue about scientific epistemology will depend on Wikipedia or 24-year-old aides (who will go to Wikipedia) to tell them what to think.  Even if it is not literally Wikipedia, it is some other source at that level, like news reporters or a local advocacy group, that interprets science at the level of what shows up in the concluding sentence of research paper abstracts.  As readers of this blog know, such claims are not reliable in health science.  Indeed, Wikipedia and most news outlets intentionally cultivate this kind of uncritical-acceptance-based behavior.

On a few occasions I have tried to correct errors in Wikipedia where something was once widely believed to be true but has since been shown not to be (and, I think in all those cases, was never actually based on evidence; it was just one of those conventional wisdom problems).  But even when I phrased the change as "it was once believed that…, but it has now been shown/established that… [source]", the editor who controlled the page quickly changed it back.  I was informed, in effect, that most of what is out there on the web still presents the old view and does not acknowledge a controversy, and since science is democratic in the Wikipedia world, the old version stands.  Given that experience, I choose to focus on forums where most readers know enough to recognize at least the basic credibility of what I argue, even when it is contrary to what they thought they knew and what others claim.  My project in this blog is to figure out how to help people skip a few steps on this knowledge ladder, but that does not help much for those who do not even seek that knowledge.

The problem with knowledge at the news or Wikipedia level is that the people compiling it do not know who they should believe, or even how to recognize when there is legitimate controversy.  Wikipedia is truly great at what other non-expert encyclopedias were always quite good at, getting non-controversial factoids correct, and it dramatically broadens the coverage (from "when did Lincoln deliver the Gettysburg Address?" to "who were the finalists on American Idol?").  It is pretty good with scientific controversies that do not have much of a worldly political angle ("when did humans arrive in the New World?" "what is the definition of 'species'?").  But it and the newspapers fail when it comes to current controversies in active, politicized sciences that public officials need to wade into.

The Wikipedia-level authors get their information from anyone who can publish an authoritative-seeming paper.  This gets pretty close to maximum current expertise in many sciences, where the people authoring study reports mostly know what they are doing and generally know who to look to when they do not.  There might be disagreement over ultimate conclusions and best methods, but not complete ignorance about best methods or who the leading thinkers are.  But this is not the case in health sciences.  Most people writing the epidemiology papers, the sources of the summary "knowledge" that is used in policy, have no idea what constitutes expert thinking in epidemiology.  Thus there is yet another layer of not knowing enough to really know, which makes uneducated faith in experts and "common sense" that much less likely to identify good advice.

For example, on the question of whether there are health effects from industrial wind turbines, the government of Ontario, Canada (a major hotspot in that fight) seems to put a lot of stock in the thin report on the subject by their Chief Medical Officer of Health.  (CMOH is a strange Canadian institution wherein a physician-administrator type is always the province's chief public health advisor.)  I was reminded of this a couple of days ago when I saw a newspaper cite that report as if it were authoritative.  The problem is that the CMOH and her staff were in way over their heads in writing the report: not only did they not know what the available evidence was, they did not know whose analysis to believe.

Funny story:  I was cross-examined by a lawyer representing Ontario at a proceeding where I had presented testimony that the CMOH report was a joke, albeit in a less combative and more detailed way, of course.  She asked me something along the lines of, "since you know so much, did you ever contact the CMOH to try to provide useful input into the writing of the document?"  It boggles the mind.  I expect it would require more search and processing power than Google has to identify every instance of someone writing a supposedly expert report that is beyond their capability, and then to direct the real experts to proactively contribute to it.  It seems more promising for report writers to track down the experts and ask for input.  Of course, they have to know who to even ask.

The situation in Ontario is that the lawmakers trust an authoritative-sounding government official who knows more than they do but is far from an expert in the science, and she in turn does not know who to believe or how to interpret what they say.  Perhaps the people she believed do know who the real experts are, but they have shown no evidence of that.  I am not sure whether Ontario legislators have the same educational profile as their American counterparts, but it really would not take much scientific understanding, coupled with a bit of partisan education (lobbying) in the subject matter, to realize that the CMOH report is worthless.  But if the local lawmakers do not have the skills to recognize, even when given some information and advice about scientific thinking in the spirit of what I do in this blog, that their "experts" are giving them bad information, it does not help much that true expertise exists merely a few layers away.