Start by considering the epidemiologic estimates for the increase in risk from second-hand smoke. The evidence, when assessed by someone who is not intent on promoting smoking bans, puts the risk so close to zero that it is impossible to say definitively from the data that any risk exists. (There are decent theoretical reasons to surmise that there is some risk, so it seems safe to assume that the risk is not zero. But it is small.) A few studies of people who have experienced the most extreme long-term exposure get numbers like a 20% increase in risk. As is inevitable with random sampling and publication bias, there are a few that go tens of percentage points higher.
So let us consider the possibility that the risk is a bit higher than 20% -- that is, that a nonsmoker who is exposed to second-hand smoke is more than 20% more likely to have a heart attack, at all or much sooner, compared to that same person not being exposed. (Re the "much sooner" point, see the observation from Part 1 that a very-short-term harvesting effect would wash out of the annual statistics.) This number is unrealistically high and at most might be considered a worst-case estimate of the risk for those with the highest accumulated lifetime exposure. But even if it were the average effect for those with passing exposure at smoking-allowed restaurants and bars, it would obviously be far higher than the effect of that exposure averaged across the whole population. Only people who were exposed in the first place would have that risk, and only those who go from exposed to unexposed as a result of an intervention can benefit from it.
How many people go from being exposed to restaurant/bar smoke to unexposed as a result of the ban? It is a bit fuzzy to define this since there will be a lot of people whose exposure is reduced, and a spectrum of how much it is reduced. But we can start with the observation that roughly half of everyone had approximately zero such exposure before the ban, never or almost never going out to eat and drink, or avoiding smoking-allowed venues when they did. (To really get this right, we would need to figure out the portion not of people but of total risk -- a 20% risk increase for an exposed 70-year-old would cause a lot more absolute risk than the same percentage would for the 25-year-olds who pack some bars -- but it seems likely this would strengthen the point I am making, since the older highest-risk people tend to go out to party less.) Thus, even if you believed that exposure at the level of visiting restaurants and bars causes somewhat more than 20% increase in risk, which is an absurd belief in itself, there is no possible way the effect of the smoking ban could be more than about half of the claimed 21%.
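To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python. Every number in it (the 20% excess risk, the roughly half of the population with any exposure, the assumption that the ban eliminates all of that excess risk) is an illustrative assumption taken from the reasoning above, not a measured quantity:

```python
# Back-of-the-envelope bound on the population-wide effect of the ban.
# All inputs are illustrative assumptions, not measured values.

relative_risk_increase = 0.20   # generous assumed excess risk for the exposed
share_exposed = 0.50            # share of the population with any bar/restaurant exposure

# Baseline risk cancels out of the percentage calculation, so any value works.
baseline = 1.0

# Average population risk before and after the ban, assuming the ban
# removes the entire excess risk for everyone who was exposed.
risk_before = share_exposed * baseline * (1 + relative_risk_increase) \
              + (1 - share_exposed) * baseline
risk_after = baseline

max_reduction = (risk_before - risk_after) / risk_before
print(f"Maximum plausible reduction: {max_reduction:.1%}")  # about 9%, versus the claimed 21%
```

Even with these deliberately generous inputs, the bound comes out around 9%, which is the "no more than about half of the claimed 21%" ceiling described above.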
Not only are there a lot of people who were not exposed in the first place, but many of those who were exposed are smokers (where do you think the smoke comes from?). No one seriously claims that the minor increase in exposure from second-hand smoke dramatically increases the risk for a smoker, on top of the already substantial risk increase from smoking. Perhaps it does somewhat, but the effect is going to be a lot smaller than the effect on a nonsmoker. Many others who are no longer exposed in bars after the ban are still exposed at home -- perhaps more so, since their smoking spouses now do more of their smoking at home or in the car before arriving at a venue. Furthermore, most of the people who experience a substantial reduction in their total exposure -- all but the nonsmoking workers and hardcore nonsmoking barflies, rather tiny percentages of the population -- experience a reduction in an exposure that was already far less than the extreme exposures that sometimes generate measurable effects in epidemiologic studies.
This is enough to show that the 21% estimate is utterly implausible. Taking it further, what does this way of looking at it suggest would be a plausible maximum effect of a bar/restaurant smoking ban?
To start, even a 5% increase in risk from the bar/restaurant exposure would be a high estimate of the effect for everyone except the aforementioned workers and barflies. We can figure that half of the population was not exposed in the first place, that easily a third of those exposed were smokers, that many of those exposed had only very minor and occasional exposure, and that many others who were exposed had only a minor reduction in exposure because most of their exposure was elsewhere. So it seems unlikely that even one-fifth of the population experienced a substantial reduction in exposure, which gets the population-wide effect down below 1%. Even if we allow for a greater effect for the small highly-exposed minority, as well as some small effect for those with a very small reduction in their total exposure, it is difficult to come up with any at-all-plausible scenario that results in a reduction of more than about 2%. (And keep in mind that this still depends on assuming the 5% increase in risk in the first place, something that is largely speculative. Thus the real figure could be much lower than even this.)
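For concreteness, here is the same kind of sketch using the more realistic (though still generous) numbers just discussed. The population shares and excess-risk figures are my illustrative assumptions for the groups described above, not data from any study:

```python
# Rough upper bound under more realistic (though still generous) assumptions.
# Every number here is an illustrative assumption from the text, not data.

baseline = 1.0

groups = {
    # group: (share of population, assumed excess risk eliminated by the ban)
    "substantially less exposed":       (0.20, 0.05),  # generous 5% excess risk removed
    "heavily exposed workers/barflies": (0.01, 0.20),  # tiny group, allow a much larger effect
    "slightly less exposed":            (0.30, 0.01),  # token effect for minor reductions
    "no meaningful change":             (0.49, 0.00),  # unexposed, smokers, exposed mainly at home
}

risk_before = sum(share * baseline * (1 + rr) for share, rr in groups.values())
risk_after = sum(share * baseline for share, rr in groups.values())

reduction = (risk_before - risk_after) / risk_before
print(f"Plausible upper-bound reduction: {reduction:.1%}")  # roughly 1.5%, well under 2%
```

Under these assumptions the arithmetic lands around 1.5%, comfortably inside the "no more than about 2%" ceiling, and that is before discounting the speculative 5% excess risk itself.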
Perhaps the full details of this analysis might call for more than common sense, but I have to assume that most people who thought about it would realize that claims of 10% reduction, let alone 20%, are completely incompatible with reality. This brings us back to the question I asked in Part 1:
"So, who would be stupid enough to believe this claim?"

I suppose I phrased that too harshly to be a general statement: The average casual reader of the news does not have time to think through most of the claims that they hear -- about the benefits of wars, the causes of unemployment, or health claims -- so their failure to question the claim should not be attributed to poor judgment on their part. They just do not have time to judge. But I will not back off on the harsh accusation when talking about news reporters and other opinion leaders who spend more than a few minutes on the topic.
Several North Carolina local news outlets reported the story without a hint of questioning the result. Once again, it becomes apparent that the journalism curriculum for health reporters no longer includes the classes that teach that governments lie habitually, and that when someone (anyone) puts out a press release claiming "hey, everyone, look! statistics show that the decision we made was a good one and did everything we said it would," it is perhaps not best to just assume they are correct and transcribe their claims. Can you imagine if these guys were teachers? "Class, now that you have finished your quiz, I am putting the correct answers up on the screen. Please grade yourself and write your score on the grade sheet that I am passing around."
The good news might be that the national press was so bored of these claims (not critical, just bored) that the story does not appear to have been picked up by any national news outlet. But that did not stop Stanton Glantz and his junk science shop at UCSF from posting about it (h/t to Snowdon for reporting that post), and you can count on it showing up in future national news stories where these hacks are quoted. We would not expect thoughtful analysis like the above from these people; we can count on them to repeat (repeatedly) any absurd claim like the one from NC as if it were correct. Indeed, we could count on them to conveniently ignore any result that was down in the realistic range.
(Q: How do you know if Stanton Glantz is spouting junk science in support of his personal political goals, damaging both science and public policy? A: They have not announced his funeral yet. Interestingly, it is not entirely clear whether he spouts junk because he has not acquired a modicum of understanding about the science in the field where he has worked for decades, or because he is a sociopath-level liar; I am not entirely sure which is the more charitable interpretation.)
So that is the easy side of this analysis, wherein reporters transcribe claims that are obviously wrong and extremist activists embrace them because they are wrong. In Part 3 I will go into some details of the modeling that are beyond the abilities of reporters and junk-science activists, but that emphasize that those who reported the results are lying and/or attempting analyses that are way beyond their abilities, and presumably know it.