Today is a good day to review some of the things you've read on this blog all through the pandemic.
***
As governments push hard for third doses of the Covid-19 vaccine, it may feel like ancient history that back in January - just over half a year ago - many experts were pushing for one-dose mRNA regimens, claiming that a single dose delivers close to 90% effectiveness.
I sounded the alarm in a post called One Dose Vaccine Elevates PR Over Science (January 2021), which began with:
"I fear that the U.K. policy of one-dose vaccines will backfire, and cause the pandemic to continue longer than necessary."
It is now clear that those who argued for single doses advocated a policy that likely caused unnecessary suffering.
In another post, published right after the Pfizer EUA in December 2020 (One Dose Pfizer is Not Happening and Here's Why), I laid out the reasons why the data could not support a one-dose regimen. Nothing I said in December has aged; every point still stands.
***
In that same post, I predicted that "Partial protection provides a convenient excuse for vaccinated people to do away with inconvenient mitigation measures." Little did I know that the CDC would subsequently endorse this folly by telling vaccinated people to drop their masks.
The CDC guidance was based on hope-fueled over-interpretation of the data. Until recently, most experts claimed that the vaccine stops infection - even asymptomatic infection - and transmission. Now they have retracted those claims. But none of those outcomes was ever formally measured in the clinical trials.
Eight months ago, when vaccinations were just starting, you read here: "I think two doses are closer to 70 percent effective in reality, and we don't know that the vaccine stops asymptomatic spread, and so continuing to reduce contacts is advisable." Many places that removed those restrictions have been forced to reimpose them.
***
"One of the key lessons of managing this pandemic so far is that good data drive good decisions, and bad data drive bad decisions. Unfortunately, policymakers have signed up for bad data, so no one should be surprised if future policies turn out badly."
That was the conclusion of another post from January, Actions have Consequences: the Messy Aftermath of One-Dose Pfizer. The situation, as of August, has not improved. At every turn, governments have failed to collect relevant, sufficient, and good-quality data. In fact, they have actively interfered with our ability to learn from data.
The central example from that January post explains one of many reasons why real-world vaccine effectiveness studies have wildly exaggerated the effectiveness of the mRNA vaccines.
"A fundamental best practice of running statistical experiments on random samples of a population is that once the winning formula is rolled out to the entire population, the scientists should look at the real-world data and confirm that the experimental results hold."
"The action of the U.K. government (and others who may follow suit) has severely hampered any post-market validation. It is almost impossible to compare real-world evidence with the experimental result, because most people are not even getting the scientifically-proven treatment per protocol!"
That's right. The original vaccine efficacy measure came from clinical trials in which participants followed a precisely timed two-dose regimen. In the real world, many countries deviated from the prescribed protocol, adopting policies such as single doses, extending the dose interval from three weeks to three months, or mixing and matching vaccines. It is an insult to science to pretend we are comparing apples to apples.
These issues should not have caught anyone by surprise. When I raised those concerns in January, the Pfizer vaccine was just being rolled out, and not a single real-world study had commenced.
A particularly galling detail is telling. In the U.K., the average interval between the two doses is 80 days (almost three months!). When U.K. studies apply the (in)famous 2D+14 case-counting window - that is, when researchers nullify all cases that occur after the first shot but before 14 days after the second shot - they are discounting cases over more than three full months (80 + 14 = 94 days). The rationale for the case-counting window is that the vaccine requires time to attain optimal effect; surely a vaccine that needs 94 days to become effective is not one with practical value!
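To see how much this window can flatter the numbers, here is a minimal sketch - with made-up rates, and a deliberately simplified setup in which the discarded cases are not matched by discarded person-time - of the window's effect on measured VE:

```python
# Minimal sketch with hypothetical numbers: how a 94-day case-counting
# window inflates measured vaccine effectiveness (VE).
days_followup = 180   # follow-up per person, from first shot (assumed)
window = 94           # 80-day dose interval + 14 days after dose 2
base_rate = 0.001     # assumed daily infection risk, unvaccinated
true_ve = 0.60        # assumed true effectiveness
vax_rate = base_rate * (1 - true_ve)

per_group = 1000  # people per group
unvax_cases = per_group * base_rate * days_followup

# Honest count: vaccinated cases accrue on every day of follow-up
vax_cases_honest = per_group * vax_rate * days_followup
# Windowed count: cases during the 94-day window are nullified
vax_cases_windowed = per_group * vax_rate * (days_followup - window)

print(f"True VE:     {true_ve:.0%}")
print(f"Honest VE:   {1 - vax_cases_honest / unvax_cases:.0%}")    # 60%
print(f"Windowed VE: {1 - vax_cases_windowed / unvax_cases:.0%}")  # ~81%
```

Even with the vaccine generously assumed to be at full strength from day one, throwing out the window-period cases pushes the measured number from 60% toward 81%.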
***
It is sobering to look at the data today (well, yesterday when I pulled this chart), courtesy of the fine folks at OurWorldinData:
Israel and the U.K. won countless headlines in the first half of the year when their case rates dropped to historic lows - they were the loudest in attributing the entire drop in reported cases to one and only one cause: widespread vaccinations. Today, they have some of the highest infection rates, much higher than in South Africa and India, which had experienced brutal surges but have relatively low vaccination rates.
In the first week of May, I wrote a post called Curve Watching During the Pandemic. Even when cases were low in Israel and the U.K., anyone willing to look at non-conforming data would have noticed gaping holes in the idea that vaccinations were the sole explanation for the trends at the time.
In that post, I concluded: "Simple one-factor models aren't going to work to explain trends across countries and time. A good causal model should include a baseline trendline, a vaccine factor, plus lockdowns and other mitigation measures." To this day, none of the studies covered by the media follow this strategy.
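For concreteness, here is a minimal sketch of what such a multi-factor model might look like. All column names and the input file are hypothetical, and a serious analysis would need lag structures, country effects, and much more:

```python
# Minimal sketch of a multi-factor model for case rates.
# The data file and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed data: one row per country-week, with the reported case rate,
# a time index (the baseline trendline), vaccination coverage, and a
# lockdown/mitigation stringency index.
df = pd.read_csv("country_weeks.csv")

model = smf.ols(
    "cases_per_100k ~ week + pct_vaccinated + stringency_index",
    data=df,
).fit()
print(model.summary())
```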
***
It is well past time to admit that the real-world vaccine studies have dramatically overstated the effectiveness of the mRNA vaccines. This is a failure of science. Worse, it is a systematic failure: I do not know of a single study that has committed the opposite error of under-estimating the VE! Clearly, science has not fielded its A team in this crisis.
On this blog, I reviewed many of the influential real-world studies that created the narrative that vaccines are as effective in the real world as they were in clinical trials. I have pointed out a wide variety of problems with such studies, and all the reasons why their conclusions are over-optimistic.
If you are interested in research methodologies, read this series of posts from March 2021:
Real-world Studies: How to Interpret Them (Mayo Clinic/nference)
Real-world studies: limits of knowledge (Mayo Clinic/nference)
The Confusing Picture in Israel and in the Israel Study (Israel Clalit)
Note on a Simpson's Paradox in Real World Vaccine Effectiveness Studies (Denmark)
What the Danish Study tells us about the CDC Study on Real-World Effectiveness (Denmark, CDC)
Eventually, in July, Public Health England published a study that provided some raw data, which I harnessed to explain these abstract concepts. This exercise led to a post called Real World Vaccine Studies Consistently Overstate Vaccine Effectiveness, in which I laid out several key adjustments that should be made to real-world VE calculations, and showed how these adjustments would have resulted in far less rosy pronouncements. My conclusion about mRNA vaccines: "A realistic estimate of VE is probably closer to 60%, which is an excellent number."
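The core calculation is simple; the adjustments are all about which cases get counted. Here is a minimal sketch, with made-up numbers (not the PHE study's figures), of how crediting the window-period cases back to the vaccinated arm deflates the headline number:

```python
# Minimal sketch with made-up numbers (not the PHE study's figures):
# how reassigning window-period cases changes measured VE.

def ve(vax_cases, vax_n, unvax_cases, unvax_n):
    """Vaccine effectiveness = 1 - risk ratio."""
    return 1 - (vax_cases / vax_n) / (unvax_cases / unvax_n)

# Headline-style count: cases during the counting window are nullified
print(f"Headline VE: {ve(50, 100_000, 500, 100_000):.0%}")   # 90%

# Adjusted count: 150 hypothetical window-period cases credited back
# to the vaccinated arm instead of being thrown out
print(f"Adjusted VE: {ve(200, 100_000, 500, 100_000):.0%}")  # 60%
```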
This is no mere squabbling amongst scientists. I repeat what I said above: "Good data drive good decisions, and bad data drive bad decisions." Throughout this crisis, policymakers did not work with good data - and by good data, I also mean good analytical findings.
***
In my very first review of a real-world study (Mayo Clinic/nference), I stated the fundamental challenge facing real-world studies: "The simplest first analysis is to compare the case rate of the vaccinated people to the case rate of the unvaccinated people. This is hopelessly flawed because in a real-world study, we must not assume 'all else equal.' People who have received the vaccines at this point are apples and oranges to those who haven't."
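A minimal sketch, with made-up numbers, shows why "all else equal" matters: even a vaccine with zero effect looks protective when the vaccinated group started out at lower risk.

```python
# Minimal sketch with made-up numbers: a vaccine with zero true effect
# (VE = 0%) appears protective when vaccinees are lower-risk to begin with.
vax_n = unvax_n = 100_000
vax_baseline_risk = 0.005    # assumed: cautious, lower-exposure early adopters
unvax_baseline_risk = 0.010  # assumed: everyone else

vax_cases = vax_n * vax_baseline_risk        # the vaccine does nothing here
unvax_cases = unvax_n * unvax_baseline_risk

naive_ve = 1 - (vax_cases / vax_n) / (unvax_cases / unvax_n)
print(f"Apparent VE of a useless vaccine: {naive_ve:.0%}")  # 50%
```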
That was March 2021. Fast forward to August, and I'm saddened to report that the situation has deteriorated. Unfortunately, the more recent studies have almost all adopted the "simplest first analysis" that is "hopelessly flawed".
Take the recent full-court press over the first few days of data after Israel started giving third doses to older people. Just one day after the shot, the case rate among those who got the third dose was already below that of those who had only two shots. To a statistician, this gap provides the best estimate of selection bias we know of: no vaccine can be expected to work at full strength within one day, so the more likely explanation is that people who rushed to the front of the line have a lower baseline propensity to get infected by the coronavirus. In other words, the third dose unintentionally reveals who the lower-risk people are.
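Here is a minimal sketch of that reasoning, with made-up rates: the day-one gap calibrates the selection effect, which can then be backed out of later comparisons.

```python
# Minimal sketch with made-up rates: using the day-one gap to gauge
# selection bias, since no vaccine works at full strength within a day.
boosted_day1 = 0.6   # cases per 100k, one day after the third dose (assumed)
two_dose_day1 = 1.0  # cases per 100k, two-dose group, same day (assumed)

selection_ratio = boosted_day1 / two_dose_day1  # 0.6: the baseline gap

# Weeks later, suppose the boosted group shows 0.3 vs 1.0 per 100k.
naive_effect = 1 - 0.3 / 1.0                         # 70% credited to booster
adjusted_effect = 1 - (0.3 / 1.0) / selection_ratio  # 50% after adjustment
print(f"Naive: {naive_effect:.0%}, adjusted: {adjusted_effect:.0%}")
```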
I discussed this in a recent blog post, which appeared in the dataviz section of my blog. This selection bias is yet another reason why the early real-world studies - even those using more advanced methods - over-estimated the VE. Thus, I concluded: "Statistics is about grays. It's not either-or. It's usually some of each. ... When they rolled out two doses, we lived through an optimistic period in which most experts rejoiced about 90-100% real-world effectiveness, and then as more people get vaccinated, the effect washed away. The selection effect gradually disappears when vaccination becomes widespread. Are we starting a new cycle of hope and despair? We'll find out soon enough."
***
This is a very long way of saying thank you for your support over the past year - and of asking you to tell your friends about this blog. I can't promise I will get everything right, but so far, the record looks good :)
P.S. The big news of this week is the FDA's "full approval" of the Pfizer vaccine. I am not devoting more space to it than this paragraph because there is nothing to discuss: the only additional information publicized since the interim analysis in December was the "6-month" update, which offered nothing new (link). This was not a decision based on science. In fact, on cable news yesterday, the experts were not talking about the science; the entire rationale for the decision sounded as if someone at the FDA had spent time reading consumer surveys. I refer you to Peter Doshi, who nicely summarized the issues at the BMJ (link).