In my previous post, I walked through how vaccine effectiveness is being divined from real-world data, and why this currently popular method of analysis substantially overstates VE.
Today, the LA County health department issued a press release that performed this flawed analysis by the book. I'm adding this supplement because the Public Health England study from which I obtained the data used them for a different calculation (also flawed, but that's a different post).
The pivotal paragraphs are these (my italics and bolding):
As more people have gotten vaccinated, the proportion of total cases that are among those vaccinated has also increased. This is to be expected because as more people are vaccinated, the number of fully vaccinated people becoming infected will increase. In June, fully vaccinated people represented 20% of all cases diagnosed among L.A. County residents, while unvaccinated and partially vaccinated people accounted for 80% of cases.
Public Health estimates if the 52% of County residents that are fully vaccinated were not vaccinated, the amount of new cases would perhaps be double because everyone would instead have the same risk of infection as unvaccinated people do. While County numbers have been going up, they would be much higher if there weren’t as many people fully vaccinated.
What was done in this analysis?
It starts with cases happening in a given window (June here, corresponding to April 12-Jun 4 in the UK dataset). It then divides these cases into two groups, "fully vaccinated" and "others". Fully vaccinated typically means cases occurring at least 14 days after the second shot (or first J&J shot). We have no idea whether they also removed mild cases from the fully vaccinated case count; other wording in the press release implies they did not.
Separately, they tallied the number of people who are fully vaccinated as of the end of the study period, and then called everyone else "unvaccinated". The Californian population is split roughly half and half at this time.
Using those values, they computed case rates for the "fully vaccinated" and "unvaccinated" (others). They then claimed that unvaccinated and vaccinated people are the same in all ways except their vaccination status, and therefore imputed the counterfactual case rate for vaccinated people (assuming that they had not been vaccinated) as equal to that of unvaccinated people.
With that methodology, they said the number of cases would have doubled. That implies that the vaccination cut the case rate by half, which is a real-world VE of 50%. So, buried in this press release is a huge backtrack from earlier claims of 90-100% effectiveness.
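The imputation can be made concrete with a short sketch. The 52/48 population split and the 20/80 case split come from the press release; the absolute case counts below are hypothetical round numbers chosen only for illustration:

```python
# Counterfactual imputation as described in the press release, using
# hypothetical round numbers (only the 52/48 and 20/80 splits are real).
vax_pop, unvax_pop = 5_200_000, 4_800_000    # 52% fully vaccinated
vax_cases, unvax_cases = 2_000, 8_000        # 20% / 80% of June cases

# Impute: assume vaccinated people would have had the unvaccinated rate.
rate_unvax = unvax_cases / unvax_pop
counterfactual_vax_cases = rate_unvax * vax_pop
counterfactual_total = counterfactual_vax_cases + unvax_cases
observed_total = vax_cases + unvax_cases

# Fraction of counterfactual cases averted -- the population-level
# reduction behind the press release's "perhaps be double" claim.
averted = 1 - observed_total / counterfactual_total
print(f"counterfactual/observed = {counterfactual_total / observed_total:.2f}")
print(f"cases averted = {averted:.0%}")
```

With these splits, the imputation gives about 1.7 times the observed total (about 40% of counterfactual cases averted), which the press release rounds up to "perhaps ... double".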
***
The reality may be worse because the methodology has clear flaws.
First, the at-risk population for the vaccinated group is over-estimated because of immortal time bias. The only people who could contribute cases to the June tally are those who got their second shot before the middle of May. Those who took their second shots in the last few weeks are added to the denominator but cannot possibly contribute cases to the numerator, thereby under-estimating the vaccinated case rate.
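A minimal sketch of this denominator inflation, with hypothetical counts (the split between vaccinees who were at risk in June and those vaccinated too recently to count is assumed, not taken from the press release):

```python
# Immortal time bias: recent vaccinees inflate the denominator but, by
# construction of the 14-days-after-second-shot rule, cannot contribute
# June cases. All counts are hypothetical.
at_risk_vax = 4_000_000    # second dose early enough to have countable June cases
immortal_vax = 1_200_000   # second dose too recent to appear in the numerator
vax_cases = 2_000          # hypothetical June cases among the fully vaccinated

true_rate = vax_cases / at_risk_vax                       # rate among those actually at risk
reported_rate = vax_cases / (at_risk_vax + immortal_vax)  # rate the analysis reports
print(f"vaccinated case rate understated by {1 - reported_rate / true_rate:.0%}")
```

The larger the share of second doses administered near the end of the window, the bigger this understatement gets.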
Second, the number of cases attributed to "unvaccinated" is over-stated because this analysis counts (a) infected people who are in between shots and (b) infected people who have taken two shots but not yet reached 14 days, all as "unvaccinated cases". In addition, the 2D+14 (second dose plus 14 days) case-counting window cannot be applied to unvaccinated people since they didn't receive any shots.
Third, the at-risk population for the unvaccinated group is under-counted. Consider someone who got fully vaccinated on June 3 but tested positive on June 5 (someone of type (b) from the previous paragraph). This person is classified as "fully vaccinated" as of the end of the study period. However, the positive case is counted as "partially vaccinated", which then falls under "unvaccinated" because the infection happened prior to 2D+14 days. This is equivalent to giving the case to the unvaccinated group but counting the person in the vaccinated group, simultaneously upping the case rate for the unvaccinated and reducing the case rate for the vaccinated. The more vaccinations are given out during the study period, the larger this bias becomes.
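The effect of this case/person mismatch can be quantified with the same hypothetical numbers as before, plus an assumed count of type-(b) cases (vaccinated people whose positive test fell before 2D+14):

```python
# Third bias: the case is filed under "unvaccinated" while the person
# sits in the "fully vaccinated" denominator. All counts hypothetical.
vax_pop, unvax_pop = 5_200_000, 4_800_000
vax_cases, unvax_cases = 2_000, 8_000
misfiled = 500    # assumed type-(b) cases counted against the unvaccinated

# Rate ratio (vaccinated vs unvaccinated) as the flawed analysis computes it
naive_ratio = (vax_cases / vax_pop) / (unvax_cases / unvax_pop)

# Rate ratio if person and case are assigned consistently
# (here: both filed under "vaccinated")
fixed_ratio = ((vax_cases + misfiled) / vax_pop) / ((unvax_cases - misfiled) / unvax_pop)

print(f"implied VE, flawed:     {1 - naive_ratio:.0%}")
print(f"implied VE, consistent: {1 - fixed_ratio:.0%}")
```

Even a modest number of misfiled cases moves the implied VE noticeably, and the shift grows with the pace of vaccination during the window.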
The above biases arise from the methodology. The biggest issue in real-world data is selection bias: people who decide to get vaccinated are different from people who decide not to. Recent studies no longer offer any analysis of selection biases. The earlier studies indicate that the unvaccinated group may skew toward minorities and toward people living in areas with higher case prevalence.