Unhealthful News 181 - Avastin likely to be de-listed for treating breast cancer; if only they explained why

This week there was some seriously unhelpful health news.  One of the most talked about bits of health news of the last few days (example, example) is that an FDA committee concluded that the drug Avastin, used to treat several types of cancer, is a bad choice for treating breast cancer.  It is likely that the FDA will therefore withdraw approval for that use.

I was curious about what was known, what the concerns were, what motivated the decision, what will happen, etc., so I read some of the news and health pundit reports on the topic.  I could have kept going, and probably pursued a more technical angle, but I started to find it more interesting that I was learning almost nothing from what I was reading that was not already contained in the headline. 

The committee's unanimous judgment was that the risks outweighed the benefits.  The only counter-arguments discussed came from breast cancer patients who believe they have benefited from the drug, begging not to lose it.  Today it was announced that Medicare would still cover its use for breast cancer (because it will still be approved for other cancers, it will remain on the market, and FDA regulations do not prevent it from being used "off label").  This brings up some questions.

"Risks outweigh the benefits" can have rather different meanings.  One of them is the subtle "if you use this drug rather than another then, adding up all the outcomes including cures and side effects, you are a bit worse off."  That is a case where nothing much can be done except look at the overall statistics and go with them.  But there are other variations, an extreme version of which might be, "this drug saves 5% more of those who take it for a year compared to the alternative, but it rapidly kills 6% of those who take it."  There is no practical difference among these unless it is possible to figure out if someone is in the group that benefits or suffers from the choice, and presumably that has already been done based on demographics and details of the cancer to the extent currently possible. 

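To make that extreme hypothetical concrete, here is a minimal sketch with made-up numbers matching the 5%/6% scenario above; nothing in it comes from the actual Avastin evidence.

```python
# Made-up numbers only, mirroring the hypothetical 5%/6% scenario above;
# not based on any actual Avastin data.
takers = 1000
saved_by_drug = 0.05 * takers    # 50 extra people alive at one year vs. the alternative
killed_by_drug = 0.06 * takers   # 60 people rapidly killed by side effects

net_lives = saved_by_drug - killed_by_drug
print(net_lives)  # -10.0: "risks outweigh benefits" on average

# Yet for anyone known, or discovered early enough, to be outside the 6%,
# the drug would be the unambiguously better choice.
```
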
There is one other possibility, where it is possible to start a treatment and figure out whether it is going well.  I have no idea whether that might be possible in this case.  You would think that after reading thousands of words analyzing the regulatory decision some information on that might have crept in, but no.  Some of the severe side effects mentioned (digestive system perforations, bleeding) are such that they can be detected and perhaps treatment can be changed, but perhaps by then it is too late, or changing the treatment is not safe.  I am sure these things are known, but no one reported them. 

Some pundits complained that the news stories took many paragraphs to get to the most important point, that the studies show that the drug does not improve survival.  I agree that this is the most important point.  But I take issue with the implication that this is all that is useful to know.  There is talk about continued research to find if there is a subset of breast cancer victims who might benefit; that is what pharma companies always want to do in cases like this.  But it might be possible to identify cases that are benefiting.  Or it might not.  The news and pundits appear to be utterly silent on that point.


This is interesting because of the amount of attention devoted to women who were asking that the drug not be taken away from them because they believed it was helping.  Not a single report or analysis I saw made the obvious point, that they have no way to know that, because they have no idea what would be happening if they were not taking the drug.  (There is evidence that the drug slows cancer growth even while not improving longevity, which would create the appearance of benefit.)  On the other hand, no one suggested it might be the case that this subset is right about being the ones who benefit (which would mean that others have been hurt, because we know the overall balance is "no benefit").  Perhaps it is the case that if someone escapes the nasty obvious side effects, then having the drug is better than not having it.  That is kind of like the 5%/6% scenario I made up, wherein if it does not kill you early it is helping you.  Presumably this information exists, but the allusions in the news to there being some slight prognosis improvements for some stages of cancer were not very useful.

This brings up a second question, which is why anyone expert would suggest continued use is a good idea.  Individual consumers are often irrational, and mistakenly think that averages do not apply to them, and think they can see causation (that the drug is causing them to be healthier) when it is really too complicated to see.  Individual medics are no better.  But Medicare's policy decision implies that someone who is supposed to understand these things thinks it is wise to keep using Avastin to treat breast cancer.  Yet to the news reader, there seems to be no basis for expert disagreement in the discussion.  If there is no benefit, then there is no benefit.  Again, presumably I could dig deeper into expert discussions and make sense of this, but how can it be that the news reports implicitly tell us there is this controversy, but no one thinks to report the basis of disagreement?

The only consternation reported was not about the challenge of scientific disagreement, but what to do with all of those poor women who are benefiting from the drug and testified in favor of keeping the indication.  (Answer: Um, let them keep taking it if they really want it so much?  It is not being banned after all.)  The committee hearing was even called a "death trial" for them (as in "death panel"), because they were not just statistics.  Some local news stories picked up individual examples of the human interest drama of those who will be deprived of this wonderful drug.  Oh, where to start.

We expect superficial news reporting that emphasizes uninformative stories over useful statistics.  It is pretty typical that the useful statistics are buried in the article.  But in this case, nothing that was reported allows the reader to have any idea whether the anecdotal claims have any basis, or should have any effect on decision making.  There is the usual spate of statements like "the plural of anecdote is not data" from self-styled pundits, but that statement is not actually true.  There are plenty of situations where anecdotes about the non-average cases are informative.  There are cases where off-label use makes sense for an identifiable subset, and so Medicare should pay for it.  Is this one of those cases?  Maybe someone understands these points and knows the answers, but they are apparently not among those writing the news and commentary.

Unhealthful News 180 - Study of "No Smoking Day" may be a new low in bad epidemiology and health economics

Ok, that is probably not true, given how much other bad anti-tobacco "research" there is out there.  But this is a really good one.  It was so good I stole it from today's weekly readings in THR post so that I could expand on it here.

It was published in the quasi-journal Tobacco Control, of course.  I will provide the entire abstract here so you do not have to bother with the link.
How cost-effective is ‘No Smoking Day’?
D Kotz, J A Stapleton, L Owen, R West 
Participants: A total of 1309 adults who had smoked in the past year who responded to the surveys in the month following NSD (April 2007–2009) and a comparison group of 2672 adults who smoked in the past year who responded to the survey in the two adjacent months (March and May 2007–2009). 
Main outcome measures: The number of additional smokers who quit permanently in response to NSD was estimated from the survey results. The incremental cost-effectiveness ratio (ICER) was calculated by combining this estimate with established estimates of life years gained and the known costs of NSD. 
Results: The rate of quit attempts was 2.8 percentage points higher in the months following NSD (120/1309) compared with the adjacent months (170/2672; 95% CI 0.99% to 4.62%), leading to an estimated additional 0.07% of the 8.5 million smokers in England quitting permanently in response to NSD. The cost of NSD per smoker was £0.088. The discounted life years gained per smoker in the modal age group 35–44 years was 0.00107, resulting in an ICER of £82.24 (95% CI 49.7 to 231.6). ICER estimates for other age groups were similar. 
Conclusions: NSD emerges as an extremely cost-effective public health intervention.
Taking this from the top, we have to start by observing that they are claiming that about 10% of all smokers attempted to quit each month.  This indicates either some very faulty data or such an expansive definition of "quit attempt" (like "I woke up and decided I was going to quit, but I started again during my morning break") that it is meaningless.
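
For reference, the abstract's own numbers chain together like this; the sketch below just reproduces their reported arithmetic (every input is taken from the quoted abstract), which is not an endorsement of it.

```python
# Reproducing the arithmetic reported in the abstract; every input is
# copied from the quoted text, nothing is recomputed from raw data.
attempt_rate_nsd = 120 / 1309         # ~9.2% among respondents after NSD
attempt_rate_comparison = 170 / 2672  # ~6.4% among respondents in adjacent months
difference_pp = (attempt_rate_nsd - attempt_rate_comparison) * 100  # ~2.8 points

smokers_in_england = 8.5e6
extra_permanent_quits = 0.0007 * smokers_in_england  # their "0.07%" claim, ~6,000 people

cost_per_smoker = 0.088           # pounds
life_years_per_smoker = 0.00107   # discounted, modal age group 35-44
icer = cost_per_smoker / life_years_per_smoker  # ~82.2 pounds per life-year

print(f"{difference_pp:.1f} pp, {extra_permanent_quits:,.0f} quits, ICER £{icer:.2f}")
```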

Moving on, they assume that the entire observed difference is neither random nor the result of Easter/Lent, Passover, spring holidays, the misery of March, or anything else that might make April different from nearby months.  It seems like these might make a wee bit more difference than an arbitrary declared day that most people pay no attention to.  It is kind of interesting that they did not give us a month-by-month breakdown, which we might have expected if the month with "No Smoking Day" were the global or even a local minimum.

Beyond that, their interpretation of what NSD entails is quite silly.  They treat it as if it is some kind of medical intervention that is independent of other causes of quitting, but really it is (at most, even if it really works) a focusing event, causing people who are considering quitting soon to say "ok, I am going to do it that day".  So the effect, if there really is one, is to move quit attempts from May and June back to April.  Perhaps not of trivial import (remember that smoking for just a couple more months is as unhealthy as using smokeless tobacco for your entire life), but not the same as causing quitting that would not have otherwise happened.

And this says nothing of their magical ability to detect permanent cessation from a cross-sectional survey.  Even if they have some standard prediction about permanence, quitting for different motivations – like, say, a focusing event – will inevitably have different permanence rates.

As if this were not bad enough, where they really jump the shark is the cost-effectiveness analysis.  Reporting the cost of a declared focusing day per smoker is LOL funny.  I wonder how much National Kale Week cost per meal at which kale was served; I will bet it was quite a bargain too. 

That "ICER" is the "incremental cost-effectiveness ratio", which measures the cost-effectiveness of an intervention as compared to the alternative it would replace, one that is more cost-effective (so a better deal) but less effective in total.  In other words it accomplishes an analysis like: "if we are going to take driver protections one step beyond seatbelts and add airbags, which are much more expensive but will save a few more people, we should make sure not to give airbags credit for the people that seatbelts alone would have saved anyway by comparing them to no restraints at all."  What makes this funny is that they pretend to be using a somewhat complicated good measure, one that is often skipped, creating erroneous results (e.g., airbags are measured against no restraints at all; pharmaceuticals are measured against placebos rather than existing effective treatments), to look at something that they got totally wrong.  In this case, the alternative that is crowded out by NSD (like airbags+seatbelts crowds out seatbelts-alone) is the same people quitting a bit later, which they completely ignore.  So what they claim is an ICER is really just the most basic, and misleading, cost-effectiveness calculation, one that pretends nothing would motivate quitting were it not for NSD. 
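
In case the airbag analogy is easier to follow with numbers, here is a minimal sketch (entirely invented figures, not from any real safety data) of the difference between the honest incremental comparison and the flattering naive ratio:

```python
# Invented numbers illustrating the ICER idea: credit the add-on only with
# what it achieves beyond the cheaper alternative it displaces.
cost_seatbelts, lives_seatbelts = 10_000_000, 100   # hypothetical program
cost_both, lives_both = 60_000_000, 110             # seatbelts + airbags, hypothetical

naive_ratio = cost_both / lives_both  # ~545k per life: compares against doing nothing
icer = (cost_both - cost_seatbelts) / (lives_both - lives_seatbelts)
print(naive_ratio, icer)              # 5,000,000 per *additional* life saved
```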

So they go on to report the cost-effectiveness not as £80 per life-year saved, but as £82.24.  Even if their estimate of the effect of the intervention were as precise as the 2.8 percentage points they report (which is not even possible, given that they are basing this on only a few hundred events), they could not get precision even to the first significant figure, let alone the fourth.  The guess – "estimate" would give it too much credit – about how many life years will be saved by someone quitting (again, even pretending that NSD caused the quit, that their estimate of permanent quits is based on anything, and that the quit would not have happened a month later even without NSD) requires assumptions about the next half-century of medical technology and other health effects.  It cannot be reasonably guessed at to within a factor of two, let alone to one part in 10,000 as they imply.  Someone develops a cure for cancer or emphysema, and the benefits plunge; some other breakthrough extends life by 100 years so long as you do not get cancer or emphysema, and the benefits shoot up.
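
To see how hollow those four significant figures are, note that the reported number is simply cost divided by the assumed life-years gained, so any uncertainty in that guess passes straight through (a sketch using only their published inputs):

```python
# Sensitivity of the reported ICER to the life-years-gained guess alone:
# if that input is only good to within a factor of two, so is the answer.
cost_per_smoker = 0.088
central_guess = 0.00107

for life_years in (central_guess / 2, central_guess, central_guess * 2):
    print(f"life-years gained {life_years:.5f} -> ICER £{cost_per_smoker / life_years:.0f}")
# roughly £164, £82, and £41: the fourth significant figure means nothing
```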

About the only thing that can really be said about their conclusion is that there is no doubt that NSD is more cost-effective than funding people who write articles for Tobacco Control.

Yes, this is what passes for science in anti-tobacco.  Is it any wonder that they can reconcile "hundreds of millions of people are not quitting" with "a tobacco free world by 2030" or whatever?  It would be humorous if it was not so incredibly damaging.  Oh let's be honest:  In spite of being incredibly damaging, it is frackin' hilarious.

Unhealthful News 179 - Getting molested by TSA is starting to look pretty good

Oh it is going to be fun to watch this one play out.

A few blogs have started putting out various versions of the story that the new x-ray body scanner airport security machines that are increasingly installed in major US airports (which the bloggers call "naked body scanners", so they are making clear their opinion about them) are causing an elevated risk of cancer for the Transportation Security Administration officials who are operating them, and that the claims that they were safe were actually based on approximately nothing.  The latter seems quite plausible.  I am a bit doubtful about the former, but it is an interesting story.

In any case, it is a fun day for the Freedom of Information Act, which revealed some of this information, as well as the source of my previous post from earlier today.

There are definitely grounds for suspicion about the whole program.  Some European airports have installed full body scanners that do not use ionizing radiation, and thus are undoubtedly safer, but there are rumors that the US is using an inferior product because the manufacturer is well connected.  I am not sure about that, and it did not show up in anything I saw today, but I am sure it will come back into the narrative, so I will leave it to those who are better at that kind of investigation.  Some of the points that came out this week were rather more interesting:  Far from saying the machines are perfectly safe, as the US authorities claimed they had said, various researchers made clear to the government that they could offer no such assurances, and they even suggested that TSA workers should avoid standing near the machines – which, of course, is still safer than being the passenger standing inside them. 

The claims that the radiation dose is acceptable seem to have been based on the assumption that it was evenly distributed throughout the body rather than being concentrated in the skin, creating a much higher local dose that could cause skin cancer.  However, a lower dose spread more evenly might create just as much cancer, just at low levels in each organ, and most skin cancers are not that threatening, so the concentration might even be a good thing.  It is quite a muddle.  But what is clear is that the information given to the public was inaccurate.  It is also interesting that TSA agents asked for dosimeters (which measure cumulative radiation exposure and are common for lab workers, reactor workers, etc.) but were refused.

All of this information makes the government's behavior look very sketchy, and the charges against them are quite plausible.  Meanwhile, no plausible response to the charges is apparent.  From an epistemic perspective, the existing accusations and criticisms clearly put the burden of proof on those who are claiming safety, and they seem to have nothing.

On the other hand, the claim that will probably generate the most interest is pretty much groundless without further information.  TSA screeners in Boston have complained about elevated cancer rates, and the allegation is that TSA authorities have dismissed the claim and tried to cover it up.  This is a classic case of a cancer cluster, as I have written about in this series previously.  It is always the case, due to the random distribution of disease occurrence, that some group of people somewhere has a very high rate of some cancer or other disease.  The fact that they self-identify does not tell us much because it is almost impossible to tell a chance cluster from one that is being caused by some exposure.  Indeed, we cannot even be sure that there is an elevated rate in this case, since there seem to be no numbers forthcoming.  Moreover, the claim is that there is also an elevated rate of stroke and heart disease, which are not plausibly related to the low-level radiation.  Finally, the machines have not been in place long, and almost all cancers take much longer than that to manifest; the basal cell carcinoma that has been proposed as the greatest risk typically appears more than a decade after the triggering exposure (usually a bad sunburn). 
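
For readers new to the cancer cluster problem, a minimal simulation with made-up numbers (nothing to do with any actual TSA data) shows why some alarming-looking clusters are guaranteed even when every group faces identical risk:

```python
# Toy simulation: many workplaces with identical underlying risk, yet some
# will look strikingly elevated purely by chance.
import random

random.seed(1)
n_sites, workers, baseline_risk = 500, 50, 0.02   # expect ~1 case per site
cases = [sum(random.random() < baseline_risk for _ in range(workers))
         for _ in range(n_sites)]

print("worst site:", max(cases), "cases;",
      sum(c >= 4 for c in cases), "sites at four or more times the expected rate")
```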

So the Boston cluster probably signifies nothing, and the "cover up" – i.e., realizing that there is not a plausible causal relationship and not pursuing the matter – was a reasonable response. But in light of the real cover-ups that have occurred, and the general failure to understand the cancer cluster statistics problem, it could prove mighty embarrassing.

Oh, and for the record, a few days ago, before this story broke, I was directed to go through one of those scanners and refused, insisting on a manual search.  This was not because I was intrigued by the thought of being felt up, but because I never believed the claims about the safety of those machines.  Even though I am sure the risk is very small (and so I would not have done this if the cost to me were high – say, if I had not had a half hour to kill before my flight boarded anyway), it seems to me to be an obligatory act of civil disobedience, to demand the slow labor-intensive option as a protest against the dishonest way in which we have all been exposed to this bit of potentially dangerous security theater.

[Unrelated:  Those of you who are working on learning the nuances of epidemiology by reading this series might want to check out my ongoing comments at this post, which currently focus on distinguishing confounding from causal intermediaries, but are expanding into other points.]

Ellen J. Hahn does not libel Brad Rodu so much as expose the ignorance of anti-tobacco "researchers"

(A non-UN post)
Several people have sent me this FOIA-disclosed email, implicitly suggesting I write about it.  In it, University of Kentucky professor Ellen J Hahn writes about my friend Brad Rodu, a prof at rival University of Louisville.  The context was not reported, but the email seems to have been sent to some local health officials, and was presumably intended as an attack to try to prevent them from learning something (which was undoubtedly true) from Brad about harm reduction.  Hahn wrote:
Please note the [sic] Dr. Rodu is on the Board of Directors for US Tobacco and has funding from the smokeless tobacco industry.  He is a big supporter of use of smokeless tobacco to quit smoking.
Brad is quite proud of the latter characterization, and discloses his funding much more aggressively than anyone I know (and far far far more readily than the anti-tobacco people do).  But the first bit, about being on a board, has been characterized as libeling him.  The claim is false, but while I am no lawyer, I really do not think that merely giving someone credit for holding a position, grant, etc. that they do not have is libel.  If we call that, or the false statement that someone has funding from industry, libel per se, then we are implying that if it were true, it would be a crime or at least a terrible embarrassment.  Neither of those is true.  Perhaps if someone were well-known for stating that he does not take industry funding, and then someone claimed otherwise, it would be an explicit accusation of lying, which could be considered libel.  But otherwise it is best not to fall into their trap of implying that corporate efforts to support honest research and THR are a bad thing.

On the other hand, Hahn probably thought such board membership was at least as bad as lying, so maybe its libel-ness should be judged by the standards of the writer.  For example, if someone claimed:
Ellen J. Hahn, a University of Kentucky professor who actively opposes tobacco harm reduction, received competitive grants from a pharmaceutical company that makes nicotine abstinence aids and a cigarette company that is not moving into the low risk tobacco business, both of whom stand to lose business if the public health benefits of THR are realized
she might consider it libel even though it is merely giving her credit she does not deserve.  Of course, I am not claiming that -- I have no idea if she has ever received such grants -- so, gee, I hope no one takes that paragraph out of context.  Just in case someone wants to, I had better rephrase it using a trick I learned from the tobacco control people:
It is worth considering the possibility that Ellen J. Hahn, a University of Kentucky professor who actively opposes tobacco harm reduction, has received funding from pharmaceutical companies that make nicotine abstinence aids and cigarette companies that are not moving into the low risk tobacco business, both of whom stand to lose business if the public health benefits of THR are realized.  She certainly sounds like one of those people who are being paid to oppose legitimate public health efforts.

But I have not yet gotten to the really important, and even somewhat funnier, bit.  I noted that Brad would never have bothered to deny he was on the board of "US Tobacco".  Among the very good reasons for that is that there never was a major company called "US Tobacco".  There was "United States Tobacco" until 2001, when the company changed its name to "US Smokeless Tobacco".  Hahn's email was written in 2010, so even if she started writing about this back in the days of the old name (which she only got somewhat wrong), you would think she would have adjusted sometime in the 21st century.  This might seem like a petty distinction, but keep in mind that this was not an ongoing correspondence about a topic in which abbreviations might creep in.  It was a one-off statement of fact, intended as an accusation, and so it obviously called for getting the statement just right.  Presumably, then, the error reflects the fact that she did not know what she was talking about.

The bigger problem, which proves she did not know anything about the topic, is that UST (its stock ticker and the typical shorthand for the company) was acquired by Altria, integrated into that company, and put under its board of directors; this was completed in 2009, as anyone writing in 2010 would have known.  At least anyone with a modicum of knowledge about the world of tobacco would have known it, and Hahn signed her letter with "Director, Tobacco Policy Research Program" and "Director, Kentucky Center for Smoke-free Policy", so she seems to be claiming she is an expert in the matter.

So, Hahn did not commit libel.  She just committed the usual factual sloppiness that is common among the scientific "experts" involved with tobacco control when trying to further their cause, ignoring science and other sources of fact and disciplined reasoning in favor of just saying anything they think might help their case.  And furthermore, it is not even an exceptional case of that by the standards of the tobacco control "research" crowd.  It is important to make this distinction, because these people do commit libel sometimes, and we should reserve the term for those occasions so it does not get worn out.  As for accusing them of simply not knowing what they are talking about and saying anything they think will further their cause without regard to its accuracy, well, there is no way to avoid wearing those out.

Unhealthful News 178 - Why is it never a subsidy for healthy behavior?

The battles over financially penalizing people for unhealthy behaviors seem to be increasing well beyond historical levels.  After the triumphs in the area of tobacco, which themselves continue to grow, the battle has ramped up to punish people for soda, alcohol, and other unhealthy foods, and even to impose fines on recipients of public health care if they are overweight; and, as with tobacco, this has begun to include private lawsuits.  (Important note to those of you reading too fast:  Notice that the antecedent was punishing people for their behaviors, and thus "triumphs" refers to increases in punishing people, a triumph in the minds of people who think that is a good thing.  I am most certainly not suggesting these are triumphs from the perspective of public health or any other humanitarian interest.)

Last week a data mining exercise revealed that consumption of fried potato products independently predicts more weight gain than does soda consumption.  (Notice the phrasing there:  Those foods are predictors of weight gain, not necessarily the cause, even though some other exposures are "controlled for" to some extent.  Also, the simplistic analysis does not provide useful quantification, even though quantities were reported in the press by people who did not even understand that they do not understand what the study results meant.)  In response, the LA Times half-joked about imposing a potato tax, to mirror the proposed soda taxes.  It is pretty clear that if one is justified, then so is the other – both of these foods are a combination of energy (calories) and entertainment, with little other benefit.  A parent would be wise to discourage young children from consuming them, as yummy as they are.

But what should we think about the government doing so?

Much has been said, but something occurred to me that is absent from that debate:  No one ever suggests making such taxes revenue neutral, and there is almost no talk of using subsidies instead.  There are plenty of arguments to be made that government has no business doing this at all, of course.  But to test the honesty of the claims made in support of the taxes – "it is good for society", "it will save healthcare money" – it would be interesting to see whether supporters would be so excited if these proposals were not profitable.

Some would still be adamant supporters, of course.  This includes some people who genuinely care about people, understand welfare economics, and genuinely believe that people make non-welfare-maximizing decisions – i.e., not in their own best interests, based on their own preferences – and so want to assist them in making better (as defined by their own preferences) choices.  But this is a tiny minority of the supporters.  Most government officials who buy into these schemes seem to be most interested in the promise of more money.  Would they be on-board if it were just about supposedly helping people?  And of course, there are the "health promotionistas" who believe they know best about what is best for people, and are willing to punish the people into conforming to their god-given knowledge.  Putting them to the test is a little more difficult, but it could be done.

To take away the government-coffers incentive, it is just necessary to make the policy spend at least as much as it takes in.  It may not be trivial to figure out ways to spend money in the same direction as the taxes since, for example, we cannot subsidize tap water, already approximately free, as an alternative to penalizing soda.  But the taxes are not exactly trivial either, so it is not all that much harder.  (E.g., for a soda tax there are dozens of unintended consequences and major complications, like:  Are they really going to ban free refills, and if not, how can they tax by unit volume?  Are they going to forbid raising the prices of all non-taxed drinks to keep them all the same, as most every restaurant will prefer to do?  Will clerks be required to police self-serve soda fountains to make sure someone is not evading the tax by lying about their soda being diet?)

A subsidy for healthy foods might have to be a bit more oblique than the soda or potato tax, like requiring that all of the tax revenue be given to grocery stores in proportion to how much broccoli they sell.  But the main point is to ensure that those pushing for the taxes really are motivated by the incentive effects of the price increase and not just wanting to skim some money for themselves.  You have to figure that this is the real motive of governments; supporters in New York practically said as much.  So let them prove they really are supporting these proposals for their public health benefits.  Indeed, governments should even be required to kick in a bit more, to lose a bit of money from the policy, something they should be happy to do if they are really motivated by the wondrous predicted healthcare savings. 

This has the added benefit that government will not become dependent on the revenues, like they have with cigarettes.  Efforts to prevent tobacco harm reduction are inspired, in no small part, by the governments and activists who do not want to lose their cigarette tax gravy train.  Yes, the broccoli lobby might try to keep people drinking soda to keep their subsidy flowing, but if we spread the subsidy widely enough, no one will be in a position to want to make sure the behavior does not abate because they are now profiting from that market.

That brings us to the outside activists.  You can be sure that anti-obesity (etc.) busybodies and their pet researchers are drooling at the prospect of getting their very own Legacy-like boondoggle of money.  It will not be as big as the cigarette tax payoff, but it will still support all manner of useless activity for many useless people for the rest of their careers.  So we should absolutely take it away from them by requiring that the revenue be spent in a way that does not benefit them.  In fact, we can make them pay for this.  If they are so excited about incentives, let them experience some.  The rule could be that a major policy initiative that is onerous to people, like a soda tax, but that supposedly will reduce the health budget by x%, should come with an automatic x% reduction in population research, advocacy, education, and other health promotion funds.  After all, with that much of the problem solved, not as much money will be needed.  Perhaps that would encourage a bit more honesty in these over-the-top predictions that never come true (they are generally off by approximately 90% of the predicted benefits), as well as some priority setting. 

After all, we cannot let these people continue to gorge themselves on whatever they want without facing some repercussions.  The expenses they impose on the rest of us are simply unfair.

Unhealthful News 177 - Prevention is better than cure, but preventive measures are often not

I had planned to cover the news today, but I had one more overview thought I wanted to communicate.  I have written some of this before, but I have a few new thoughts that I think are interesting.  I will repeat enough to make it unnecessary to go back and read anything. 

When major failures occur in a normally functional system, it is usually due to a combination of three causes, not just one as we often try to simplify it to:
  • operator error (e.g., pilot error; occasionally a failure would occur no matter what the proximate actor did, such as when the hardware just breaks, but usually some specific goof triggers it – otherwise it would probably have already happened)
  • hardware inadequacy (not necessarily that something broke, though that might be involved, but that it could have been designed to avoid the failure in question if that had been the goal)
  • systems failure (rules or patterns of behavior made the operator error more likely and did not guard against or mitigate the effects of this particular failure mode; e.g., making it easy to push the wrong button, not inspecting the tires often enough, not building in redundancies)
I commented yesterday about a novel that included the U.S. CDC fighting an all-threatening disease outbreak that required all of their abilities and major police powers, contrasting that with the current government practice of nibbling away at freedoms and pleasures to provide trivial health benefits at great psychological cost.  "CDC" is, of course, the now inadequate abbreviation for the Centers for Disease Control and Prevention.  And who can complain about the fact that they or any other government entity have prevention as part of their mission?  How can prevention be bad?

It is bad when it lets a certain ilk of people try to make everything about operator error.

An illustrative example of focusing on the operator that you have probably seen is the card reader or gas pump that has multiple layers of hand-scrawled signs telling people which button to push or which direction to slide, accompanied by clerks who get annoyed a few dozen times a day when they have to point out the sign to those who do not notice it and consistently guess wrong about what to do.  This is a case of the operator being blamed for, and being forced to compensate for, hardware and systems failure:  a device designed in a way that defies expectations badly enough that many people guess wrong about how to use it, coupled with no better system for encouraging correct usage than yelling at the operator.  Another favorite is the hotel key card, where the clerk repeatedly warns you to keep it away from your phone and wallet or it will demagnetize, which inevitably happens.  These problems do not merely offend the engineer in me, but cause needless cost.  They call for using better hardware (it exists in all of these cases) or figuring out a way to gently guide people to avoid the problem. 

Such solutions tend to be noncontroversial.  No one complains about health researchers when they figure out a hardware fix to a problem, like a new drug.  No one should complain (other than, perhaps, about the cost) when government improves infrastructure to give people the opportunity to behave in a healthier way, such as by installing bike paths, making sure that urban bodegas sell fruits rather than just chips, or requiring that restaurants list calorie counts.  (Occasionally someone complains about such actions but they generally lack legitimate grounds to do so.  Perhaps the predicted benefit does not seem to outweigh the cost, though seldom is that the justification.)

But "...and Prevention" becomes a problem when it consists of harming people in an effort to force them to change their behavior.  That is, they put the onus on the operator.  This is a remarkable combination of bad judgment and bad ethics.  The ethical arguments, both libertarian and cost-benefit based, have been made here and elsewhere to such an extent that I see no reason to repeat them.  The more practical argument is that in most arenas, the people running a complicated human system recognize that they must look for ways to improve the system, since bludgeoning people into being better operators is usually pretty useless.  Pilots are not trying to crash, after all.  And if all of your troops are miserable to the point that they are not functioning well, you can try to whip each of them until they perform better, but there are much better solutions.

If your students are doing badly, you can admonish them to study harder, but if it keeps happening year after year, there is probably something wrong with the teaching, or the motivation, or the community, or something else beyond the individual.  Yes, each student could save his own life, but if so many of them are failing to do so, we obviously need systems fixes, just like with a card reader where the users insert the card the wrong way most of the time.  Imagine a task force assigned the job of improving the performance of schools proposing the policy, "ban televisions and video games, and institute corporal punishment and public humiliation for bad grades, and make sure there is no safety net for students who cannot get through school to make sure they have an incentive to do better."  But that is basically what the "health promotion" people propose when tasked with disease prevention regarding drugs, diet, and exercise.  They sometimes talk about making healthier communities, but a close look reveals that they are often just demanding that each individual behave differently.

When I was reviewing applications to public health school from the hundreds of indistinguishable Indian applicants trying to get into American schools (usually with the intention of getting a foot in the door so they can become American physicians), their nearly identical application essays included the phrase "prevention is better than cure" in the opening paragraph.  That sounds fine to a public health person, notwithstanding the pathetic repetition, until you notice that it is not always true.  Preventing a particular case of a serious disease is almost always better than letting it happen, but that is not how things work.  We cannot go back and prevent a particular event.  We can only take prevention measures.  Some of those are justified and efficient: clean water is definitely better than treating cholera, and the right vaccines and industrial regulations are appropriate and worthwhile. 

But in keeping with my observations from yesterday, others represent cases of mistaking a prevention measure for an act of prevention.  Just because it would almost certainly be a good thing to prevent a fatal smoking-caused cancer or accident does not mean that everything that might theoretically protect someone is a good policy measure.  Preventive measures have broad negative effects and may not save anyone.  I doubt that any honest, intelligent person really thinks that emotionally violent pictures on cigarette packs are going to cause many people to not smoke.  But there is strong support for them because people mistake a preventive measure that seems like it might do something for a way of preventing a particular outcome.  It does not work that way.  Whoever it was in someone's life who suffered or died because of a risky behavior is not going to be retroactively helped.  Preventing that case would have been good, and curing it also, but that is not necessarily true for a prevention measure aimed vaguely in a direction that might have prevented it.  And implementing public policy rules is not a healthy form of therapy.

Unhealthful News 176 - But if you try sometimes, you just might find

Last night, I finished reading Mira Grant's novel, Feed.  It is about bloggers and a future filled with zombies.  I trust that is enough information to polarize my readers into those who will absolutely not consider reading it and those who are intrigued.  For the latter, I recommend it as entertaining moderately-light reading with some good deeper messages.  (very minor spoiler alert)  In it, the CDC has become a latter day military + police + homeland security for dealing with the biological threat of zombie virus infection that puts everyone at dire risk.  I think one of the reasons I liked the book is that it is so refreshing to think of government public health people fighting a genuine major health threat rather than fiddling with soda, salt, and e-cigarettes.  Of course, there are hints that they might be part of a power-politics conspiracy, and the author somehow randomly puts them in conspiracy with the tobacco companies, which is quite strange because she has constructed a future where people are protected from cancer and so smoking has become a popular and comparatively non-harmful activity.  I guess she just has some personal pique about that one.

Anyway, perhaps we have people who are just wired to fret about risks and hazards and to try to do something about them, and this urge is not based on the actual magnitude of hazards that are faced.  In a previous era they just would have inflicted that neurosis on their own kids, but now they have found ways to infantilize entire nations.  In the zombie-filled future, they will have something useful to do.  But right now they are like a large politically-powerful standing army during peacetime, an institution that tends to create the urge to fight pointless wars.

Today I went to the funeral for an old friend/classmate/roommate (so someone who was only my age).  People cannot always be protected from the things they choose to do for fun.  I have not changed my mind about that, even though we certainly see how sometimes a single event triggers some people – those with that wiring I mentioned – to direct their crusade in a particular direction.  But most of the time that effort is something that can only make life worse, on average, and usually consists of waging war on some vice that, as in this case, would have turned out to make no difference.

Today, catching up on the lives of people I had not seen or heard about in a while, I became starkly aware of how psychological health matters so much more than longevity by almost any measure.  My friend had more total happiness in his foreshortened life than most people could ever hope for, but I was reminded of how many people do not manage to have much.  In some never-turned-off circuit in my brain, it redoubled my disgust with what passes for do-gooder public health these days, actions that increasingly threaten to worsen people's psychological states for comparatively trivial physical benefits. 

It also occurred to me that he probably would have been a great asset when the zombie war started.


P.S.  This actually does make sense together, at least in my head.  Also, just for today, I am closing the comments.

Unhealthful News 175 - You cannot avoid mosquitoes

Pretty insightful title, huh?

Those of us who write about public health with a libertarian bent (or what I prefer to call a humanitarian bent – we treat people as people, rather than mere biological processes, and care about what they care about) often point out that the behavioral preventive measures that the "health promotion" types favor are often not worth the costs.  But sometimes they are not even preventive.

I was just noticing the usual annual flurry of news stories about mosquitos and west nile virus (no link – I am not basing this on any particular story or claim).  Many of them offer suggestions about avoiding being exposed (lots of repellant, staying indoors at certain times of day), usually in conjunction with a warning about WNV being detected in mosquitoes in the local media market.  Some stories are about government efforts to eradicate the exposure vector with over-the-top chemical attacks on the local mosquitoes.  The problem is that it is probably impossible to avoid exposure to WNV.  Maybe I am thinking about this because I was just in northern Alberta feeding approximately 500 mosquitoes per day.  No WNV there, but if there was, I would definitely be exposed.  And during my time being outdoorsy in Texas, Minnesota, and other places, I have no doubt I have been exposed.  And so have most of you who have lived in places where it is endemic. 

At least that is my guess.  The last time I followed the science on this topic was about five years ago, and at that time no one had a good idea of what portion of the population in areas with WNV had antibodies (which proves exposure, though people without detectable antibodies may have been exposed too).  Maybe knowledge has improved since then, but I have not heard anything.  Anyway, the point is that it is a better assumption that everyone in areas with lots of WNV mosquitoes is eventually exposed than to make the usual assumption implicit in the news reports (and the public policy).

What is that assumption?  That only those people with diagnosed cases of west nile disease have been exposed.  This is absurd, but it is remarkably common among those people who are supposedly expert on infectious disease.  Think about numbers you hear, like a 75% mortality rate for some exotic new disease.  What that means is that if you look only at the people who show up at a hospital because they are in critical distress from SARS, bird flu, etc., lots of them die.  That is not exactly the same as what is being claimed, since thousands or millions may have gotten the infection but showed few or no symptoms.
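
The denominator problem is easy to see with made-up numbers (purely hypothetical, not for any real disease): the same deaths look wildly different depending on whether you divide by diagnosed cases or by everyone actually infected.

```python
# Hypothetical illustration of the denominator problem described above.
deaths = 75
severe_cases_seen = 100      # the people who showed up at a hospital in distress
total_infected = 10_000      # hypothetical: most infections mild or never noticed

case_fatality = deaths / severe_cases_seen    # 0.75 -> the scary "75% mortality"
infection_fatality = deaths / total_infected  # 0.0075 -> the risk per actual infection
print(f"{case_fatality:.0%} vs {infection_fatality:.2%}")   # 75% vs 0.75%
```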

So the precautions against WNV assume that most people who have never suffered a disease case from the virus must have not been exposed, so they should protect themselves (or the government should protect them) from exposure.  But, again, there is absolutely no basis for this assumption.  Frankly, it is kind of crazy.  It is pretty clear that, among people in endemic areas, what keeps someone from becoming a disease case is much more likely to be genetics or other personal characteristics, not the absence of exposure to the virus.

This is a case where you might choose to let the press off the hook (though it would be good if they were better at what they do) because they are being fed a steady stream of nonsense from the supposed experts.  You can avoid getting HIV by practicing safe behavior.  But if a substantial fraction of mosquitoes where you live carry WNV, it will find you (unless you can get them all to use condoms).  And as long as the experts keep up the fiction that avoiding exposure is the best way to prevent disease, we are unlikely to find a solution for the majority of people who are not willing to seal themselves up from bugs for months every year.  This naive fixation is annoying and absurd, and yet strangely true to form for official "public health" people.

Unhealthful News 174 - Many results in journals are wrong, and that's ok

I received the following observation via twitter.  It looks like it had been RTed and perhaps MTed a few times, so I am not sure who first said it, and it does not matter.  I also do not know whether it was facetious or serious.  The suggestion was:
 Maybe we shouldn't publish until results are replicated
The accompanying link was to a case of a researcher who fabricated data and the published papers have been withdrawn.

I can understand the frustration that causes a lot of non-scientists to complain that too much gets published that turns out to not be true.  Rarely is this because of something as blatant as fabricating data, though it is quite often for actions that ought to be considered similar, like researchers intentionally fishing for a model that gets the result they want and hiding that fact.  Sometimes, though, it is the completely honest and proper error that comes from random sampling or an unrecognized flaw in the study.  Readers want simple right answers, like they learned in high school science classes.  But science in action does not work that way.

Probably some of you immediately thought of the point I wanted to make about the above quote:  How, exactly, can anyone replicate something that has never been published?  They might stumble on the same analysis and do it themselves, but even then they would not know they had replicated something.  Why?  Because the way researchers let other researchers know that they have seen something that might be worth replicating is to publish it.  Scientific publication is primarily designed to be communication among scientific experts.

The problem comes, then, when research results that, to non-experts, are barely more understandable than raw data are communicated to people who do not understand what they are:  just one cut at a question, which might differ from the future received wisdom, or even the existing received wisdom.  And this even ignores the problem of readers not knowing how to interpret the results in a useful way, quite apart from what other studies might show.

Everyone in sight is guilty here.  Researchers (and their institutions, and the journals) tout results to the press even when no one other than a few experts can really make sense or use of them.  Most health reporters do not know enough to critically assess results or put them in context, except in the way they cover politics (which is to say, treating it like a "he said, she said" game and finding someone to just assert that the result is wrong).  Science teachers create the mistaken impression that scientific results are always Truth.  And the public demands simpler answers than can really exist (though perhaps all the blame there lies with the others on the list).

As you can surmise from my writings, I think the crux of the problem is science reporters who simply do not understand science, and health researchers who ...well… simply do not understand science.  But no one is going to get those groups to refrain from publishing until they know what they are doing.  And demanding replication of the errors will not help any -- there is no shortage of that.

Unhealthful News 173 - Believable information about nearsightedness (and why)

I need a little positivity in my life today, so I am going to write it.  Perhaps the most interesting health news story of the week was not in the news, but was an op-ed in the New York Times that argued that spending time out in the sun is the way to prevent a child from becoming nearsighted.  The authors make a good case and it seems convincing.  It is kind of interesting why it is convincing.

The authors clearly demonstrate that they are engaged in good scientific reasoning.  This is not a case of a reporter blindly transcribing something he does not understand.  (I should also mention that the authors are credentialed experts, but frankly that does not impress me.  People with those credentials write a lot of garbage too.)  They start with the observation that nearsightedness has increased dramatically in Americans over 40 years.  But particularly insightful is that they observe that there is a strong genetic component, which is pretty much common knowledge, but allude to the fact that any highly nearsighted prehistoric ancestors would have been selected out of the gene pool.  Thus, our ancestors must not have been nearsighted and so there is an environmental cause alongside the genetic cause (my words, not theirs, but they make the point precisely without the jargon).

They then point out a couple of studies that support the "playing outdoors a lot as a kid protects against nearsightedness" hypothesis.  In a typical health news story, this is all you would see.  You would then be left wondering if these studies really represented the most convincing body of evidence, or if the authors just like their results.  It is still possible that these authors are pushing a pet claim that is really not so well supported.  I know little about the subject, so could not judge. 

So why do I trust them?  Well, I understand evolutionary biology and gene-environment interactions in the abstract, and they explain those parts of the story correctly, with a precision that comes from simplifying without dumbing down.  They do not explicitly point this out, but their explanation can account for the magnitude of the effect that has been seen (because the changes in time spent outdoors are that great), unlike many such stories where something causes a large percentage increase in risk but still only accounts for a small fraction of the total.  They also address the common belief and most obvious alternative hypothesis, that staring at books and screens caused the problem.  They respond with a mere assertion that this is not true, which leaves the reader a bit dissatisfied.  We can hope that if they could have afforded another few hundred words they would have explained the claim a bit.  But the mere fact that they recognize what most people would think when told "nearsightedness has an environmental cause and is increasing in Americans", and they bring it up themselves, is a good sign.  Acknowledging the best alternative hypothesis to their own does not prove they are credible, of course, but the typical practice – failing to even mention it, hoping readers will not think of it – would prove they are not.

On the critical side, they write the phrase "four times less likely".  You know what they mean, but if you think about it, that phrase really does not work.  It does not really hurt their scientific credibility.

So, good news for my baby, who may avoid sharing my experience of having to memorize where the soap is before getting into the shower (no glasses) because I cannot see it.  And it is good to see health science writing that inspires confidence, and to be able to sort out why that is.

Unhealthful News 172 - Reviews of expert analyses are not better than expert analyses

Ben Goldacre is a blogger/columnist for the Guardian, covering much of the same ground as Unhealthful News.  He writes some interesting stuff, and often makes a point I overlooked when writing about the same topic, though I often disagree with some of his points, usually because he falls into a trap of incorrect conventional wisdom about what something means.  Recently, he posted about a research paper that he and colleagues wrote to address the question of how often health claims in newspapers are wrong.

Ironically, though, the study has at least one serious weakness.

He reports:
Here's what we found: 111 health claims [about food that could be interpreted as advice] were made in [the 10 leading] UK newspapers over one week. The vast majority of these claims were only supported by evidence categorised as "insufficient" (62% under the WCRF system). After that, 10% were "possible", 12% were "probable", and in only 15% was the evidence "convincing". Fewer low quality claims ("insufficient" or "possible") were made in broadsheet newspapers, but there wasn't much in it.

Sounds impressive, until you ask "what could that possibly mean?" (remember to always ask that!)  He does actually explain much better than news stories usually do, and the explanation reveals a certain contradiction in the reasoning.

I have a minor quibble with the characterization of the target population of stories.  I think it is a bit misleading to claim that you can clearly define such statements, separating them cleanly from statements about food that are so obvious or obscure that they do not count as advice.  But so long as they had a clear idea of what they were looking for, and worked hard to avoid including something just because it made their results more impressive, then that is fine.  A category can be systematic without being a clear epistemic object.  Another minor quibble is their choice to take every paper for a single week, rather than gathering the same number of issues of each newspaper from across a wider time period; the latter would help reduce random sampling error, since health news stories tend to cluster.

The important concern, however, is how they decided what category to put something into:
a heroic medical student called Ben Cooper completed this epic task, researching the evidence behind every claim using the best currently available evidence on PubMed, the searchable archive of academic papers, and current systematic reviews on the relationships between food and health.
But this depends on the published literature, as interpreted by someone who is semi-expert, representing the best expert knowledge on the subject.  It is remarkable how often that is not the case.  I can think of numerous examples where someone reviewing the literature would come away with a conclusion that is very different from that of genuine experts.  To name just three examples I have worked on that I have written about here and that come immediately to mind, someone naively reviewing the literature is likely to conclude: harm reduction using smokeless tobacco is not proven to be beneficial, H.pylori infections never go away without treatment, and routine screening mammograms at age 45 are a good idea. 

In fairness to Goldacre et al., none of these are dietary choices, which tend to involve rather simpler claims, and most of what is claimed about diet is based on a single data-fishing study (and thus is not well supported).  So they somewhat stacked the deck by choosing that particular topic.  But there are still specific subject experts who know more than a simple literature review could tell you, and cases where they recognize something is true even though the literature has not caught up.  Sometimes they are the ones making the statements to the press that get reported but do not appear to be supported by the literature, based on a naive reading.  Moreover, the researchers who wrote the papers that Cooper reviewed are often the very ones making claims to the press that are fodder for Goldacre's criticism.

There is no easy answer here.  You have to figure out who to trust, and you cannot trust that the literature is accurate if you are not going to trust the authors of that literature.  But if you are going to trust the literature and are really trying to figure out if a claim is supported, it is probably worth asking a few of the people you are trusting as experts for their opinion.  Many systematic review papers are synthetic meta-analyses, which I have pointed out are highly flawed.  But the others, that do not blindly follow a bad recipe, are heavily reliant on the expertise of their author, in both the subject matter and scientific epistemology, and there is no rule that prevents someone who is far from a top expert from writing the review (indeed, it is far more common than not).  Many reviews just take sketchy information and repackage it so that it looks authoritative.  Is this review of 111 claims such a case?  It seems even harder to do this well.

Unhealthful News 171 - What the U.S. government is doing for our health

Today in my twitter feed:

1. The FDA tobacco unit (@FDATobacco) sent out multiple tweets, seemingly once an hour, talking about how they are going to have an exciting announcement about new cigarette package rules tomorrow.  Maybe I am reading too much into 140 characters, but they seem downright giddy about it.  Whether you like the policy or not, there is something unseemly about the glee.

2. A story tweeted by @taz3cat reports that a man robbed a bank, demanding just $1, so that he would go to prison and thereby get health care.

3. Also, a reminder that thanks to the lack of effective stimulus effort, we are headed for a lost decade (high unemployment, people permanently lost from the world of productive pursuits, etc.), which will ensure that more people will need to rob banks to get health care.

But at least cigarette packages will be uglier (I assume).  Maybe that will solve everything.

That is all.  Sorry – I've had a very bad day.

Unhealthful News 170 - Followup on the benefits of smoking

This is the first of what might be some posts with thin and/or recycled content.

Yesterday I finished the third of a series of posts examining a study that supposedly estimated the full costs (including health, etc.) of a pack of cigarettes.  I used this to explain the concept of putting dollar values on the "invaluable", as well as the limits of doing so, and some other principles of cost-benefit analysis.  I concluded with the observation that it is almost impossible to justify doing an analysis that counts up how a behavior takes away life's benefits (by costing life years) without including how it creates benefits too (by improving people's mental health, functioning, and happiness).

The "almost" in the last sentence refers to the one justification I can think of, if you are using the rest of that calculation to figure out the minimum benefit someone must be getting from the behavior.  That is, if the calculation shows that the total cost someone pays per pack of cigarettes, netting out everything other than the day-to-day benefits of use, then those benefits must be greater than the caluculated cost.  So since the study put that cost at $40/pack, the benefit to a smoker must be at least that.  In the comments yesterday, Chris Snowdon picked up on this theme, and I wanted to expand upon what he wrote.

He noted,
smokers tell us how much they value smoking by how much they spend on the habit
in the form of what they pay for a pack of cigarettes (purchase price, including taxes).  He points out that the benefit must be this and then some.  In economist speak, we have a revealed preference, the gold standard in consumer economic valuation, because consumers show us that the value must be at least what they are paying or they would not do it.  I will offer a friendly amendment/clarification to what Snowdon wrote:  The purchase price is the absolute minimum floor for this value, because smokers will also consider the anticipated health costs, as well as any costs from social scorn, etc.  Even if someone makes the utterly absurd claim that smokers are oblivious to the health effects, the minimum benefit must still exceed the purchase price.
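To make that floor concrete, here is a minimal sketch with purely hypothetical numbers standing in for the study's figures; the point is only how the arithmetic works, not what the true values are.

# Revealed-preference floor on the per-pack benefit of smoking (hypothetical numbers).
purchase_price = 7.00             # retail price per pack, including excise taxes (made up)
anticipated_health_cost = 30.00   # smoker's own valuation of expected health/longevity loss per pack (made up)
other_costs = 1.00                # social scorn, inconvenience, etc., borne by the smoker (made up)

floor_if_oblivious = purchase_price
floor_if_fully_aware = purchase_price + anticipated_health_cost + other_costs

print(f"benefit per pack must exceed ${floor_if_oblivious:.2f} even if all other costs are ignored")
print(f"benefit per pack must exceed ${floor_if_fully_aware:.2f} if the smoker internalizes all the costs")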

It is certainly possible to make some arguments about the nuances, but the point is that such arguments have to be made.  If someone is going to claim that smoking is different from every other consumer good, where revealed preference is considered the gold standard and we trust people to show a tendency toward rationality and to have common knowledge, they face a rather steep burden of proof.  But in the dominant discourse about tobacco use, such arguments are not even asserted, let alone established.  And no, simply saying "cigarettes are different" is not the statement of an argument – a wee bit more detail is needed.

For example, it is possible to argue that smokers discount the future so heavily that they effectively ignore the health costs.  It seems likely that they irrationally undervalue the future to some extent, since most people discount the future too heavily about everything.  But those who want to make this argument are obliged to recognize that smokers do consider at least some of the health costs, and to quantify how much (a rough numerical sketch of that kind of quantification appears below).  With that, someone could claim that the revealed preference floor is merely the purchase price plus that fraction of the health costs, and not the true total cost.  Of course they do not do this because, as Snowdon put it,
anti-smokers heads would explode if they tried to come to terms with smoking being pleasurable or having benefits to the user....
But this refusal to acknowledge simple bits of reality means that there are only two numbers on the table for the revealed preference floor (minimum benefit): the full cost of smoking and zero.  And zero is obviously wrong.
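To put a rough number on the discounting argument mentioned above, here is a sketch with made-up figures (they are not from the study): even implausibly heavy discounting of future health costs pulls the floor down toward the purchase price, but nowhere near zero.

# How discounting future health costs changes the revealed-preference floor (hypothetical numbers).
def present_value(cost, years_until_harm, annual_rate):
    return cost / (1 + annual_rate) ** years_until_harm

purchase_price = 7.00             # per pack, made up
future_health_cost = 33.00        # per-pack share of health/longevity losses, made up
years_until_harm = 30             # harms concentrated decades in the future, roughly

for rate in (0.03, 0.10, 0.25):   # modest, heavy, and very heavy discounting
    floor = purchase_price + present_value(future_health_cost, years_until_harm, rate)
    print(f"discount rate {rate:.0%}: floor on per-pack benefit = ${floor:.2f}")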

There are other arguments to be made.  It is possible to argue that smokers feel that the benefits do not outweigh the costs in the long run, but the short term difficulty of quitting is beyond what they are capable of enduring.  This seems to be what most people have in mind when they talk about addiction (except the ones who just use the word to mean "use", and the tiny odd minority who use the word to mean some biochemical process).  This is theoretically possible, but it describes a very extreme situation.  The obvious evidence – the fact that so many people quit when they really want to, the fact that many people get over the short term hump and then start again – argues against this.  So if someone wants to craft this argument, they need a lot more than vague assertion to back it up.  The closest anyone seems to come to actually trying to argue the point is claiming that most smokers say they want to quit, or try to quit, but it is pretty clear that this is about second order preferences or is just cheap talk, and is obviously not enough to establish what should be seen as a rather extreme claim about deviation from rationality.

Bottom line:  Rhetoric can convince casual readers and impressionable children that smoking is somehow such an extremely odd experience that it defies the rules that describe 99.99% of consumption choices, that it challenges rational analysis as much as North Korean indoctrination or facing the decision to cut your own arm off to escape the wilderness, or at least as much as methamphetamine.  But scientists and other honest serious people should know that this requires some extreme evidence.  That does not merely mean showing that there is some deviation from perfect rationality, because that is true for most every decision – the argument being made is not that there is a deviation from perfect rationality, but that even the most basic rules of rational behavior do not apply.  So,

(a) Extraordinary claims require extraordinary evidence.  Hypothesizing extraordinary claims without having the extraordinary evidence is acceptable as long as someone admits they are doing this.  But never trust anyone who wants you to believe that an extraordinary claim is self-evident.

(b) Absent compelling evidence to support the extraordinary claims, we have to recognize that a group of health economists have argued that a pack of cigarettes provides over $40 worth of benefit, or at least more than a large fraction of this if we allow for deviations from perfect rationality.

Unhealthful News 169 - Flawed cost of smoking calculation, part 3: treating people like crops

In two previous posts I commented on a calculation that purported to show that the total cost of a pack of cigarettes, counting everything including the material, medical, disability, and premature death costs, is about $40.  Some people might claim that it is impossible to put some of those in terms of dollars, but as I explained in those and other posts, it is possible and necessary to do so.  It is implicitly done every time we make a policy decision that trades off market-traded resources (or, put another way, money) against health and longevity.  However, I argued, that particular calculation was wrongheaded, though it offered some useful lessons about putting dollar numbers to those goods and some legitimate criticisms of doing so.

I pointed out that about half of the $40 in that calculation consists of the smoker's own value for living longer and most of the rest is the cost of the cigarettes and foregone future income.  All the rest, the external costs, is covered by excise taxes, again paid by the smoker.  So this is basically a market decision, which makes this kind of calculation – based on a lot of shaky assumptions about assigning numbers to the "invaluable" – inappropriate.  Such calculations are needed when we do not have markets, as with infrastructure or regulations, but not for consumer market choices where the externalities are minimal or paid for.  Since these calculations are basically a very rough, kludged substitute for a real market, it makes no sense to use them if the market is functional.  As I explained in the first of my posts, the numbers for the "invaluable" do not really have a meaning apart from their role as a substitute for a market, and so the calculation makes the mistake of treating those numbers as if they had meaning in their own right.

In the second post I pointed out some apparent flaws in the calculation, including some apparent double counting.  Most of that came from including a $100,000 "value" for a person's own loss of a life-year, and then adding in secondary effects of that, like lost income.  As I pointed out, a number like that necessarily already bundles in the secondary, tertiary, etc. effects.  That is, the numbers we use are a bundle of positives (enjoying life, producing, being there for your family) and negatives (consuming resources) rolled into a single number, because it is impossible to sort them all out.  So taking such a number and then adding back in some, but not all, of the secondary effects makes no sense; it would have been fine to pick a bigger number, since that one is on the low side, but the partial adding up is rather a mess.  For example, it makes no sense to count someone's lost income as a cost without counting their foregone consumption as a benefit.

That leads to the promised biggest problem with the whole exercise, failure to use a consistent level of analysis.  An ideal cost-benefit analysis, assuming a CBA is appropriate at all, includes all costs and benefits: expenditures, opportunity costs, health costs, mortality costs, time spent, pain, pleasures, etc.  Many partial analyses do not look at all of this. 

Some look only at impacts on government or other budgets.  These are fine for what they are, but are often mistaken for having greater meaning.  For example, it is reasonable to figure out how much, on net, it costs a health system to pay for a cancer screening test, which costs something up front and causes treatment, but occasionally averts greater treatment costs.  But this is often badly misinterpreted, such as when the result shows a net increase in expenditure and someone says "it is therefore not worth doing", even though the calculation did not consider how many people's lives were saved, or other health outcomes.  The expenditure calculation might come out on the negative side, but for most medical interventions the resource cost (money spent) is a negative consequence that is justified by the health benefits.  So these "budget-based CBAs", which are not really CBAs at all, are an ironic construct wherein the CBA ignores most of the B, and health economics (the name of the subdiscipline that focuses on such calculations) ignores health.
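A toy example, with numbers invented purely for illustration (they describe no real screening program), shows how the budget-only result and the full accounting can point in opposite directions.

# Budget-only calculation vs. a fuller accounting, using made-up figures.
screening_cost    = 5_000_000     # paid by the health system
extra_treatment   = 2_000_000     # treatment triggered by positive screens
averted_treatment = 4_000_000     # later treatment costs avoided

budget_impact = averted_treatment - screening_cost - extra_treatment
print(f"budget-only result: {budget_impact:+,}")              # negative, so "not worth doing"?

life_years_saved    = 120
value_per_life_year = 100_000     # the same sort of number used in the cigarette calculation

full_result = budget_impact + life_years_saved * value_per_life_year
print(f"full accounting:    {full_result:+,}")                # the health benefits dominate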

Another place where we sometimes draw the line is in terms of market resources, ignoring health and happiness.  Again, this should not be mistaken for a full CBA.  Only a communist or fascist government, or a very impoverished society that is desperate about survival, would want to make public health decisions based only on consumption and production.  That is how we decide about spending money to make crops healthier, where we do not care about the crops for their own sake.  Still, the results might aid thinking about a choice, so long as they are not over-interpreted.

One place that is tempting to draw the line, but that does not work, is between costs and benefits.  It may seem that effects of a policy are pretty easy to classify intuitively, and some are.  But since costs and benefits are just the same thing with or without a minus sign in front, the distinction is technically arbitrary and makes a mess in many cases.  An obvious example in the cigarette calculation is characterizing the exercise as counting up the costs, but still subtracting foregone pension payments from the sum.  This is the right way to do a CBA – it certainly makes no sense to include some of society's expenditures on a person (extra medical consumption) but not the equivalent offset (less other consumption). 

Aside:  This is something that seems to elude those who believe that demands like the one behind the Master Settlement Agreement are reasonable.  The claim is that smoking costs the government money because it increases someone's medical costs.  This ignores the fact that smoking contributes quite a bit more money to the government in the form of taxes and foregone retirement benefits, so there is no legitimate complaint about "costing society money".  The only way someone gets to that conclusion is by arbitrarily ignoring some of the effects of one type while counting others.  That makes no more sense than counting just the costs of taking a new job ("I will lose the $60,000/year I make from the other job and have to commute ten miles to work") rather than netting out each category ("I will make $10,000/year more, and my commute will be shorter").  That analogy may actually be charitable; perhaps it is more like a shopkeeper complaining about having customers because all of them walk away with some of the inventory (which obviously ignores the fact that they leave money in exchange).

This brings us to the fatal flaw in that analysis of the full cost of a pack of cigarettes.  As I noted, they properly netted out some of the benefits, like subtracting pension savings.  But they ignored others, a smaller example being the inclusion of lost productivity from smokers (from smoke breaks and sick leave) but not the offsetting increased productivity (from making some people more functional).  The big and fatal example is that the authors include the loss of the value of a life-year to the smoker himself, but not the offsetting benefit of making the years that are lived better.  If the analysis is going to go beyond budgets and include what people care about, it has to include everything they care about.  Cherrypicking some preferences does not lead to a legitimate analysis.  This is well established in the field.  For example, analyses like this are used to assess the net value of a treatment (say, cancer chemotherapy) that increases longevity but imposes other non-market costs, like making someone feel terrible for their remaining months.  While some such decisions can and should be made based on an individual's personal tradeoffs, sometimes a policy decision needs to be made, and anyone who knows how to do this right knows we must put numbers to all of these considerations.  What the authors have done is arguably even worse than treating people like crops, because instead of cleanly treating people only as producing and consuming engines, they arbitrarily count some of what people care about but not all of it.

There actually is one reason you might want to calculate everything paid by the smoker (which they basically do since taxes pay for what would be externalities), leaving out the benefits of smoking:  It allows us to estimate the minimum value that smoking must have for the smoker herself.  If she is willing to suffer $40/pack in costs, then the benefits must be greater than that.  Of course, someone might then argue that the true minimum must really be lower than that because people do not understand how harmful smoking is (though this is pretty clearly false in educated societies) or that people discount the future too heavily (which is probably true, but requires some arguments and calculations to quantify – it is not good enough to just observe it is true).  Or perhaps there is just some difficult hump to get over to quit that people cannot handle, even though they have decided the benefits are not as great as the costs (which is probably what most people mean when they say "addiction", notwithstanding the attempts to misdefine it in terms of biology).

The generalization of this point is something that I do not have to explain to anyone who has thought about nanny state issues:  The "health promotion" types (the extremist loose-cannon storm-trooper wing that is often mistaken for all of public health) engage in political manipulation by adding up only an arbitrary subset of costs and benefits.  Generally they ignore everything people care about in the world other than market expenditures and longevity/productivity, and effectively tell people that they are not allowed to care about anything else.  That is the right analysis for agricultural scientists, but it is indefensible for social scientists.  The economists who did the cigarette study are respected researchers, not political hacks like most nanny state supporters.  But this particular analysis is fundamentally flawed, and that is perhaps most easily seen in the way it might cause an honest observer to think like a health promotionista.

Unhealthful News 168 - A followup and some amusing claims about smoking in Washington (state)

Some clarification and followup.  Yesterday I cited a blogger who took down a claim that driving is causing obesity, which was based entirely on the "evidence" that both were increasing almost linearly with time.  His points were entirely right and quite clever, but I felt that he had understated a key point that could aid understanding of statistical analysis more generally.  The point is that it is meaningless to describe two series as correlated if it is impossible for them to not be highly correlated, like if they are both constant or changing almost exactly linearly.  It makes little more sense to say they are correlated than to say 1 and 2 are correlated.

But I think it is worth clarifying that it is possible that driving rates really do cause some of the obesity, but that just looking at the correlation is not the way to figure it out.  The general point is this:  There are right ways and wrong ways to seek evidence of a particular relationship. 

In particular for the driving-obesity case, we would like to be able to control for all of the other variables that are causing the time trend in obesity, and then see if there is anything left that is explained by driving rates.  This is not possible, however, so the next best thing is to remove the time trend for obesity and look only at deviations from it (blips off of the trend line).  Following this standard approach, we would then look to see if the ostensible cause, driving, explains the blips.  In this case it clearly does not: driving increases almost exactly linearly, so it has no blips of its own to match the blips in obesity, and since it obviously does not explain all of the upward trend either, this evidence suggests it does not explain any of it. 
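For readers who want to see the mechanics, here is a sketch of that approach in Python.  The obesity and driving series are simulated stand-ins, not real data; the point is only how the detrending step works.

# Detrend both series, then ask whether the blips in one explain the blips in the other.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2011)

# Simulated stand-ins: both trend up almost linearly; obesity also has year-to-year blips.
obesity = 15 + 0.6 * (years - years[0]) + rng.normal(0, 0.8, size=years.size)
driving = 9000 + 150 * (years - years[0]) + rng.normal(0, 40, size=years.size)

def detrend(series, t):
    slope, intercept = np.polyfit(t, series, 1)   # fit a straight-line trend
    return series - (intercept + slope * t)       # keep only the blips

# The raw correlation is dominated by the shared upward trend;
# the correlation of the blips is what would actually be informative.
print("raw correlation:     ", np.corrcoef(obesity, driving)[0, 1])
print("residual correlation:", np.corrcoef(detrend(obesity, years), detrend(driving, years))[0, 1])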

As an aside, there are different ways to model a trend and thus control for it.  The simplest is to assume the trend is a straight line on the graph, extended through time, such that the variable "remembers" where it is supposed to be, and if it moves off the trend it tends to compensate and go back to it.  Also reasonably simple is the "random walk" around the trend, in which any deviation from the trend creates a new point from which the trend resumes, with no tendency to get back to the old trend line.  The latter probably describes more variables accurately, and economists often model it that way; the former is what is almost always used in epidemiology, probably because most epidemiology courses do not teach the alternative.
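For the curious, a quick simulation shows the difference between the two models just described; the series are invented for illustration, not estimated from any data.

# Two ways a trend can behave: a trend-stationary series snaps back to its line,
# while a random walk with drift wanders away from it with no tendency to return.
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
drift = 0.5

trend_stationary = drift * t + rng.normal(0, 1, size=n)      # noise around a fixed line
random_walk = np.cumsum(drift + rng.normal(0, 1, size=n))    # each shock is permanent

print("typical distance from the line, trend-stationary:", np.abs(trend_stationary - drift * t).mean())
print("typical distance from the line, random walk:     ", np.abs(random_walk - drift * t).mean())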

Before I wander too far, there is a concrete news target for this.  It was reported this week that:
Washington, a state that has long boasted one of the lowest smoking rates in the nation, has taken a sizable drop from its third-place ranking, tying with Maryland this year for 11th place.
Sounds huge.  Except,
Currently, 15.2 percent of adults in the state smoke, up from 14.9 percent last year, according to numbers from the U.S. Centers for Disease Control and Prevention (CDC).
The numbers from the survey that produces those results wander quite a lot, due to random sampling and other errors.  (They also are substantially lower than other estimates for US smoking rates, but that is another story.)  So the "change" reported there is better described as "no measurable change".
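A back-of-the-envelope check makes the point.  The sketch below assumes a sample of a few thousand respondents per year, a number I made up for illustration since the story does not report the survey's sample size, and it ignores design effects and non-sampling error; even so, the reported 0.3 point rise sits well inside the sampling noise.

# Is 14.9% -> 15.2% distinguishable from sampling noise?
# n is an assumed sample size, roughly the scale of a state telephone survey.
from math import sqrt

n = 6000
p1, p2 = 0.149, 0.152

se_diff = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)   # SE of the difference of two proportions
print(f"reported change: {p2 - p1:.3f}")
print(f"95% margin of error for the change: +/- {1.96 * se_diff:.3f}")
# With these assumptions the margin of error (~0.013) dwarfs the 0.003 "change".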

But in that article and an editorial, the Seattle Times joined the bureaucrats whose budget is threatened in seeking to blame this outcome, at least partially, on cuts to the anti-tobacco budget:
...may be attributed to funding cuts to the state Tobacco Prevention and Control Program, which is aimed at reducing tobacco-related disease and death, state officials say.  In the past two years, the prevention program has seen major cuts — almost 60 percent of its funding — with even deeper cuts looming.
The particulars of this are pretty funny:
And in 2009, the Legislature ended the state's anti-smoking advertising campaign.  Ads make a big difference, especially for teens and young adults who are influenced by plenty of pro-smoking ads paid for by tobacco companies peddling flavored cigarettes.
(For those who do not know, there are almost no pro-smoking ads in any influential media, and flavored cigarettes were never a major part of the market and have not been produced by any company with an advertising budget for over five years.)
Consider just one facet, the Tobacco Quit Line, a state-funded prevention program launched a decade ago. The phone service has provided expert advice and useful tools to some 150,000 people trying to kick the habit. But starting July 1, callers will no longer be able to get quit kits, over-the-phone help or nicotine replacement unless they are on Medicaid or have insurance.
(So in anticipation of losing the quit line, lots of people without Medicaid or insurance have started smoking???  Also, look at those numbers: "launched a decade ago" and "150,000 people" – that is about 1/10 of 1% of the smokers in the state getting advice every year.  It seems unlikely that this made much of a difference, especially since I believe there are a few other ways to get information these days.)

Anyway, getting back to the point of the day, we could assess whether it really appeared that the budget cuts were affecting smoking rates.  I realize that the anti-smoking people have neither the skills for nor any interest in doing good science.  But it is possible.  Probably the most useful thing to do would be to wait a year and see if the measured rate ticks down again.  In the context of the above brief bit about trend types, the real trend in smoking probably is closer to the random walk, but the measured rate has a tendency to bounce back from deviations from the trend because many of those deviations are study error. 

For those who want to make an estimate now, they should look at whether budget changes coincided with smoking rate changes at other times, not just the latest politically convenient result.  This is not perfect because there will be confounders, but it could be informative.  I would bet that: most of the decrease in smoking was before 1998; the budget leapt up in 1998 and stayed high following that; there was some decrease in smoking around then, though much less than in, say, the early 1980s when the budget was very small; the smoking rate was flat for a decade despite the budget continuing to be high.  If this is the case, then there is even less of a case to be made for their claim than there is for driving causing obesity.  At least for the latter they really did track each other.

Finally, as another revisit to a previous post, recall how in UN163 I discussed the controversy over an apparently bad health economics study and wistfully imagined what it would be like if health science was held to such standards.  Today, Krugman, who had commented at the outset, added:
So when the McKinsey alleged study made headlines, the firm was pressed to explain how the study was conducted. And it has refused to answer. 
It’s hard to escape the conclusion that the study was embarrassingly bad — maybe it was a skewed sample, maybe the questions were leading, maybe there was no real data at all. Whatever.   
The important thing is that this must not stand. You can’t enter the political debate with strong claims about what the evidence says, then refuse to produce that evidence.
Sigh.  If only that were the standard.  We know about as much about much of CDC's key data about smoking as we know about what McKinsey did in that study, and no one even complains.

Though I suppose maybe the grass is not entirely greener:
And it’s especially bad when the media give your claims lots of attention, while barely covering the furor over the refusal to explain where those claims come from.

Unhealthful News 167 - "Don't worry about it" is not sufficient advice

The health information and advice business (which some would prefer to call the health scare business) is about 90% warnings and simplistic advice and 9% calls to dismiss the former because it is overstated or otherwise flawed.  I am trying to build up the remaining 1%. 

Actually, that is probably ridiculously optimistic – I doubt that the niche that I am trying to fill is nearly 1% of the discourse.  That niche is trying to offer ways to understand the claims without just choosing to believe either the "worry" or the "don't bother about it" faction.  You might think that the "don't worry" advocates would help with that, but they usually get it wrong.

For example, this AP article was an attempt to reduce the worry about two recent health scares (HT to @garyschwitzer for the reference).  In her attempt to reassure people about the mobile phone and cancer scare, which I wrote a bit about in UN158, the reporter wrote that IARC
...said there is a possibility cellphones raise the risk of brain tumors.  "The operative word is 'possibility,'" said  [the American Cancer Society's deputy chief medical officer, Len] Lichtenfeld
Both the reporter and Lichtenfeld got that wrong.  As I explained in UN158, the rating that IARC put on mobile phones was basically "code yellow" or "3 on a scale of 5".  They label it "possible carcinogen" but that is really just an arbitrary phrase and does not have its natural language meaning.  It is based on an unspecified combination of apparent level of risk and certainty of the estimate, both of which turn out to be low in this case, mingled with no small amount of worldly politics.  But in the news story, the reporter and the ACS guy both managed to compound the confusion of that phrase by misrepresenting it as "it is a possibility", which clearly is a natural language statement that is very much not what the IARC report means.  Their reassurances that we should not worry too much about this are valid (though they tend to overshoot and suggest that no one should worry at all, or even investigate further).  But they do not seem to understand enough to offer the most useful possible observations about what IARC said.

The same article talks about the recently reported cancer risk from styrene, formaldehyde, and a few other chemicals, but offers reassurance from Linda Birnbaum, head of the National Toxicology Program at the National Institute of Environmental Health Sciences, which issued the report of the risk.  Of course this is not terribly reassuring, given that this is the same unit that, as I pointed out yesterday, put out its current report on carcinogens with a section about smokeless tobacco that was roughly 2.5 decades out of date.  But the message from Birnbaum and the reporter was that these warnings were based on occupational exposures (though the story does not actually use that standard term), which are much higher than consumer exposures.  Fine.  But then they go on to declare that consumer exposures therefore pose no risk.  We obviously do not know that.  It would be fine to say that we have not detected a risk at consumer levels of exposure, but that is different.  One of the advantages of occupational studies is that they let us look, at high levels, at exposures that are very common at low levels, and that allows some guess as to whether they might be causing some problem at the low levels.  If a problem is observed at the high levels, the guess is elevated to "it's a possibility" (the real meaning of that term).

Saying "these chemicals are absolutely harmless" is a message that more commonly comes from pro-industry groups like ACSH, but what they wrote about today was what I wrote about two days ago, the new "study" about television watching and risk of diabetes.  (They get really annoyed when people point out they are pro-industry, but it is pretty clear from a lot of what they write that they do not read my blog anyway.)  They correctly point out that the study was of little value, but note only that the results cannot be distinguished from the effects of just being sedentary, regardless of the television.  They suggest that this is the only limitation, substantially understating the limits of the research.  I will not expand on that, since I already did (recall: snacking is up there with sitting still; the study method added no information to what we already had; etc.).

Finally (and the math-phobic might want to just quit reading here), the Freakonomics blog takes on the recent suggestion that more driving is causing the increase in the obesity rate.  The author points out that the supposed evidence is that both have been trending up, basically linearly, over time.  He offers the clever counter that his age, which obviously trends up linearly over time, is just as good a predictor of obesity over time.  He explains that, in general, for a variable that follows a simple time trend, almost any other time-trending variable will fit it.  He also notes that the original authors concede that correlation does not equal causation, but argues that this is an understatement in this case.

He does not complete the explanation, however, and observe that there is not a correlation in a meaningful sense here.  That is, this is not a case of a correlation with some other explanation, but it strains the term to claim there is a correlation.  To try to explain:

You could observe that the height of the Empire State Building is a great predictor of the height of the Chrysler Building, always 62 meters taller, every time you measure them.  But it should be obvious that it makes no sense to declare that they are correlated, because they are both constants – each can be described by only a single number, so the variables do not vary.  It takes a bit more thinking to see it, but the same is true for linear trends, which you might say are a constant of sorts.

Any two series that can each be described by only two values also cannot be described as correlated with each other in any meaningful way.  Or to put it another way, they will always be perfectly related by a simple function, which means they are perfectly correlated, so suggesting that their correlation means anything is nonsense.  When something must be true, there is no information in discovering it is true.  As an example, consider the two series x={1,3} and y={100,114}; they are perfectly correlated, in that the first always perfectly predicts the second with the simple rule "if x=1 then y=100 and if x=3 then y=114", or if you prefer a linear equation, "y=93+x*7".  But the same is true when the two values that describe each series are not just two observations, but a linear trend (a line can be described by just two values, a slope and a starting point).
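For anyone who wants to check, a couple of lines of Python make the point; the numbers are toys, chosen only to show that the correlation is guaranteed rather than informative.

# Any two two-point series, and any two exact linear trends, are automatically
# perfectly correlated (correlation of +1 or -1); finding that tells you nothing.
import numpy as np

x = np.array([1, 3])
y = np.array([100, 114])
print("two-point series:", np.corrcoef(x, y)[0, 1])                  # exactly 1.0

t = np.arange(20)
series_a = 5 + 2.0 * t         # any upward linear trend
series_b = 300 - 0.7 * t       # any downward linear trend
print("two linear trends:", np.corrcoef(series_a, series_b)[0, 1])   # exactly -1.0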

The point is that the driving-obesity result is silly at a much more fundamental level than is implied by "correlation is not causation", one that is not difficult to understand.  The "correlation is not causation" phrase is used to describe situations where there is meaningful correlation between two variables that calls for an explanation (so, for example, it is not explained by "neither changes, so of course the difference is constant"), but the explanation might be some common cause that relates them (confounding) or perhaps causation in the other direction.  But two curves that have the same basic shape will always be correlated  if you choose the right scale, so there is no correlation that needs to be explained.

Failure to recognize this is what gives a lot of useful information a bad name.  People see aggregate trend data and recognize it is not convincing but do not understand why.  So they figure that all aggregate trends are uninformative (creating the fallacy that there is an "ecological fallacy").  This removes the ability to observe, for example, that mobile phones must not be causing many brain cancers because brain cancer rates are not trending up, or perhaps that television cannot explain much about diabetes because it exploded in popularity many decades ago and stayed up, while diabetes rates have changed dramatically over that time.  The difference:  several of those variables have some distinctive patterns, rather than just being straight lines over the whole range, so the bends in the curves should have matched, but did not.

Unhealthful News 166 - Last tango for salt shakers

In the spirit of yesterday's q-and-a quick hit, another one.

CNN reported:
Salt shakers disappear from Buenos Aires tables 
In an effort to combat hypertension, which affects some 3.7 million residents in the province -- nearly a quarter of the population, the health department reached an agreement with the hotel and restaurant federation to remove salt shakers from the tables at their eateries. 
"On average, each Argentinian consumes 13 grams of salt daily, while according to the World Health Organization, you should consume less than five," Health Minister Alejandro Collia said when he announced the change last month.  
….Collia said that if Buenos Aires residents can reduce their daily salt intake by three grams, it could save about 2,000 lives a year.
Will this reduce salt intake?  Almost certainly.  A lot of health-affecting decisions, especially those involving immediate pleasures, are sufficiently influenced by convenience that little changes in availability can have an effect.  Of course it is less controversial when that consists of adding a convenience (e.g., making sure snack packs of vegetables are as easy to get as chips) rather than removing one.  The only conceivable way in which this might not decrease consumption is if chefs increased salt use in the kitchen to make up for it, and overshot, which seems reasonably unlikely.  Chefs might well prefer that people not be allowed to mess with their optimized seasoning.

Will this provide a health benefit?  Ah, now there is the tough question.  A social intervention like this is not like a drug, and things may not work out the way the drug-addled think they will (and by drug-addled, I mean those who think that drug trials are the right way to think about public health interventions).  Among the possible problems: the salt reduction might not really improve health (there is substantial legitimate controversy about whether salt matters for anyone other than a small minority, who mostly know who they are); people might overcompensate (thinking they are more deprived than they are, or perhaps as an act of rebellion), actually consuming more salt; or people might eat more unhealthful food overall because the saltier food used to sate them sooner (as happened when people tried to cut down on fat intake).  It is difficult to know.

Is this a good way to find out?  It really is, assuming (a) they try very hard to get good longitudinal data that addresses the points from the previous paragraph and (b) you do not live in Buenos Aires and like salt.  Sadly, I am afraid no one will gather good data (which requires some effort and skill, and especially an interest in honestly determining the effects).  So this will probably be yet another triumph of "health promotion": Do something that sounds like it is healthful and make sure never to honestly study the effects, so you can just declare that it did what it was supposed to do; ignore the social downsides, up to and including the downfall of the government; and then port it everywhere else, saying "it was proven in Buenos Aires and everyone should follow their example."

Is banning salt shakers the most patronizing bit of nanny-statism ever?  Definitely not, because this clearly has it beat:
The measure is not as extreme as it sounds. Salt will be available by request, but only after the patrons have tasted their food.
The mind boggles at the thought of a steakhouse waiter refusing the premature request for salt, "lo siento señor, pero antes usted debe comer un bocado" (roughly, "I am sorry, sir, but first you must eat a bite"; and I apologize for the barely remembered, and thus presumably mangled, Spanish).  Treating restaurant patrons like fussy toddlers ("no yummies until you eat at least one bite!") might actually be more amusing to see enforced than Nepal's proposal to ban tobacco sales to pregnant women.