In the last post on the Covid-19 vaccine and traffic accidents study (link), I reviewed the analysis methodology: I found it hard to believe the vaccinated and unvaccinated groups differ only by their vaccination status, even after “adjustments”, and thus I am not convinced by their causal interpretation.
In this post, I examine a few issues.
Right at the start, the research team proclaimed:
The proximate causes of most crashes are human behaviors including speeding, inattention, tailgating, impairment, improper passing, disobeying a signal, failing to yield right-of-way, or other infractions.
This assertion is pivotal to the research conclusion: the researchers used involvement in traffic accidents as a proxy for rule-breaking behavior on the road. They speculated that an underlying trait of rule-breaking is the common cause of (a) a higher risk of traffic accidents and (b) greater vaccine hesitancy, which results in a lower chance of being vaccinated.
The causal diagram – even if true – can have few public policy implications. It explains a plausible self-selection bias in the unvaccinated group (inherent rule-breaking personality). It may suggest that if governments successfully “cure” this bias, then we’d simultaneously observe not just higher vaccination rates but also a reduction in traffic accidents, other traffic violations, other crimes, absenteeism in schools, fare evasion in public transport, tax evasion, etc. But solving general unruliness appears to me a much harder problem than solving vaccine hesitancy!
The claim that most traffic accidents are driver-caused is supposedly supported by a third-party reference, but that study tallies only traffic accidents involving fatalities (which comprised fewer than 0.2% of the outcomes in this study), and even then, about half of the documented fatal accidents occurred under “normal” conditions, without any rule-breaking.
The causal proposal is further weakened by linking individuals to traffic accidents even if they were pedestrians or passengers, rather than drivers. These people – who could not have caused crashes through undesirable behavior – accounted for almost 60% of all outcomes.
Imagine a drugged-out but unvaccinated driver carrying one passenger runs a red light, and hits and kills a pedestrian. All three individuals are counted as involved in traffic accidents. The study authors would have concluded that the unvaccinated driver would not have crashed the vehicle had s/he been vaccinated. By contrast, if the pedestrian were unvaccinated, the study authors would have us believe that s/he would not be dead if vaccinated, regardless of whether the driver was vaccinated!
It’s a mess. It’s the kind of mistake that is easier to avoid if one collects the data through shoe leather.
***
One factor that drives vaccination behavior has received scant attention – risk tolerance. It’s well known, and not just among economists or psychologists, that humans have varying levels of risk-seeking or risk-averse behavior. Much of the insurance and gambling/gaming industry is built upon such heterogeneity. We also know that the measured risk of Covid-19 infection or case severity depends substantially on key demographic factors such as age, gender, occupation, and prior comorbidities. So it should be obvious that the propensity to want the Covid-19 vaccine varies with (a) the predictable risk of infection and/or case severity, (b) the individual’s level of risk aversion, and (c) the individual’s assessment of the probability of exposure, including the interactions between these factors.
The public-health narrative tends to adopt the axiom that everyone is alike, and that may be its biggest failure. As noted in the prior post, both the peer reviewers and the study’s researchers failed by neglecting the most obvious proxies for the above factors – prior traffic accidents and prior healthcare usage. They have no excuse for excluding these factors: the former was computed for the follow-up period, while the latter was used in an exploratory matching analysis. The research finding would be seen in a new light if it turned out, for example, that the unvaccinated group had been involved in more traffic accidents prior to the study period.
***
Throughout the paper, risk is given in the form of an x per million chance of getting into a traffic accident. For example, the risk for the unvaccinated was described as 912 per million, and for the vaccinated, 530 per million. These are tiny numbers made to look huge. Since there were 1.8 million unvaccinated people, the ~900 per million means about 900*1.8 = 1,620 unvaccinated people were involved in accidents, while the ~500 per million applied to the roughly 9.2 million vaccinated means about 500*9.2 = 4,600 vaccinated people were involved in accidents. In what period of time?
Glad you asked :) The study has a follow-up period of just one month (30 days)! Literally, the research only covered the month of August 2021. They did not look at any other month. (It’s quite likely there would be some reversion to the mean over time.)
912 per million is another way of saying 0.0912%. Roughly 0.1% of unvaccinated people were involved in a traffic accident during August 2021, and roughly 0.05% of vaccinated people were. Only 16% of the population were unvaccinated. Even if we assumed correlation is causation, and even if we forcibly vaccinated everyone, the impact of such a policy is at most 1.8 million * (0.1% − 0.05%) = 900 people. That’s 900 people out of 11 million, and remember, some of these are pedestrians or passengers, not drivers. Most of these accidents involve zero fatalities.
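For concreteness, here is the same back-of-envelope arithmetic in a few lines of Python, using the approximate cohort sizes and the unrounded per-million rates quoted above (the unrounded rates give an even smaller ceiling than the ~900 figure):

```python
# Back-of-envelope: convert the quoted per-million rates into head counts.
# Cohort sizes are the approximate figures cited above.
unvaccinated = 1.8e6   # unvaccinated adults in the cohort
vaccinated = 9.2e6     # vaccinated adults (total cohort ~11 million)

rate_unvax = 912 / 1e6  # crash involvement during August 2021 (~0.09%)
rate_vax = 530 / 1e6    # ~0.05%

print(round(unvaccinated * rate_unvax))  # ~1,642 unvaccinated people involved in crashes
print(round(vaccinated * rate_vax))      # ~4,876 vaccinated people involved in crashes

# Ceiling on the policy impact, assuming the entire rate gap is causal and
# every unvaccinated person gets vaccinated:
print(round(unvaccinated * (rate_unvax - rate_vax)))  # ~688 people out of 11 million
```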
This number represents the risk in a single month, not annualized. There is no explanation in the paper as to why such a short follow-up time was applied. Perhaps it has to do with the continuing vaccination campaign, so that the number of unvaccinated – and thus potential beneficiaries – keeps declining. (For those interested, none of the main results in the study censor or remove unvaccinated people who subsequently got vaccinated during or after August 2021.)
***
As with most Covid-19 observational studies, researchers selected a start date for counting outcomes, and didn’t bother to demonstrate that their findings are invariant to the choice of such a date. In this study, the follow-up start date was set as July 31, 2021, so the follow-up period comprises the single month of August 2021.
This is a strange date on which to settle. Take a look at the case curve in Ontario:
At the end of July 2021, cases in Ontario were at a trough, and during August, Covid-19 cases were steadily growing. This entire month missed the next peak of infection, which was toward the end of the year. In each Covid-19 wave, hospitalizations and deaths lag cases. The following curve shows Covid-19 deaths in Ontario:
Thus, many of the deaths associated with cases reported in August showed up after the 30-day follow-up window (in September). We should ignore any commentary on hospitalizations and deaths – specifically, the claim that this study “validate[d] that vaccination is associated with large reductions in subsequent COVID pneumonia.”
The authors pointed to Table 3 to support the COVID pneumonia claim. There, we find a total of 5,358 cases. This number is more than five times the hospitalization count implied by the above chart for August 2021 (31 days x fewer than 30 per day, i.e. under ~930), implying that less than 20% of patients with COVID pneumonia were hospitalized. This contradicts the description by Cleveland Clinic that says "If you’re diagnosed with COVID pneumonia, it’s likely that you’ll be admitted to the hospital." (link).
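The same sanity check in code form, using the figures just quoted (the daily hospitalization ceiling is my reading of the chart, not a number reported in the paper):

```python
# Rough check of the COVID pneumonia claim against the Ontario hospitalization chart.
pneumonia_cases = 5358            # total COVID pneumonia outcomes reported in Table 3
daily_hospitalizations_max = 30   # approximate ceiling read off the chart for August 2021
days = 31

max_hospitalized = days * daily_hospitalizations_max  # fewer than 930 admissions in the month
print(f"Implied share hospitalized: under {max_hospitalized / pneumonia_cases:.0%}")
# under ~17%, i.e. fewer than 1 in 5 "COVID pneumonia" patients could have been hospitalized
```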
What else happened during August 2021? The Ontario government started to differentiate testing policies between vaccinated and unvaccinated mid-month (link). In some settings, only unvaccinated people were subject to routine testing. The more tests, the more cases found.
What else also happened during August 2021? According to this press release, the provincial government issued a vaccine mandate for "high risk" settings, made third doses more widely available, and expanded eligibility for the Pfizer vaccine to younger children. These policies created selection bias in the analysis groups.
The timing issue is a problem endemic to Covid-19 observational studies. Researchers should repeat the same analysis methodology over different time periods and show us the results. I have a hunch that many of these results are transient.
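As a sketch of what such a check could look like – assuming one had person-level records with a vaccination flag and crash dates, which I do not; the file and column names below are hypothetical – the same per-million rate can be recomputed over a series of monthly windows:

```python
import pandas as pd

# Hypothetical person-level file: one row per person, a 0/1 vaccination flag,
# and the date of any crash involvement (blank if none). A real analysis would
# also update each person's vaccination status at the start of each window.
people = pd.read_csv("cohort.csv", parse_dates=["crash_date"])
vax = people[people["vaccinated"] == 1]
unvax = people[people["vaccinated"] == 0]

def crash_rate_per_million(df, start, days=30):
    """Crash involvement per million people during a follow-up window."""
    end = start + pd.Timedelta(days=days)
    return 1e6 * df["crash_date"].between(start, end).mean()

# Repeat the calculation for several window start dates instead of committing
# to the single window beginning July 31, 2021.
for start in pd.date_range("2021-05-01", "2021-12-01", freq="MS"):
    print(start.date(),
          round(crash_rate_per_million(unvax, start)),
          round(crash_rate_per_million(vax, start)))
```

If the gap between the two groups swings around from month to month, that alone would tell us the August 2021 estimate is not a stable effect.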
P.S. [1/11/2023] Added link to the original study
I agree it's a mistake to treat passengers and pedestrians equally to drivers, but it's absurd to say that they "could not have caused crashes through undesirable behavior". Reckless pedestrians or passengers can *absolutely* lead to accidents!
Posted by: DJAD | 01/11/2023 at 11:02 AM
DJAD: Sure, I should have phrased that not as an absolute. Your comment raises the same problem from another angle: if we assume (wrongly) that passengers are the cause of most crashes, then the drivers would not have benefitted regardless of vaccination status since they did not engage in rule-breaking.
Posted by: Kaiser | 01/11/2023 at 12:16 PM
A couple of key assumptions being made:
1) "Involved in" an accident means "at fault" in the accident.
For bicyclists and pedestrians, this is obviously untrue. But even if one car hits another, generally only one is at fault.
2) Assumption that unvaccinated is a "rule breaker." Vaccines were first targeted to those most at risk because of underlying medical conditions. It may simply be that unvaccinated people are, on average, healthier and more likely to be driving.
Posted by: Dave C. | 01/11/2023 at 12:22 PM
DC: This is why they should have adjusted for prior traffic accidents, prior healthcare usage, etc. Or at least shown statistics comparing the two groups so we can see whether there was preexisting bias. I'm surprised they didn't do this as it is pretty standard for observational studies to compare pre vs post rather than just look at post.
Posted by: Kaiser | 01/11/2023 at 04:50 PM