Some years ago, I started using the term "story time" to describe a common practice in research studies: the researchers lull us with some data analysis, and then, while we're falling asleep, feed us conclusions that are unsupported by the data!
I'm starting to see "story time" in reporting on vaccine trial results. I suspect that much of it is unintentional, arising from a shallow understanding of the analyses. Here is a recent example from an article in the Independent about care homes in the U.K. seeing high levels of infection after a single dose of the Pfizer vaccine. The reporter stated:
Research suggests the Oxford vaccine does give some partial immunity after a delay of between two to three months but trials for the Pfizer jab did not compare different dose intervals.
This statement is partially true, in the sense that the Oxford study provided an analysis of dose intervals while the Pfizer study did not (see my earlier post here). The story time begins the moment we infer that dose interval was studied in a well-designed clinical trial.
***
The Oxford study was not originally designed to study dose intervals. In fact, the original design of the Oxford study called for a single dose, which means there was no such thing as a dose interval. In the middle of the study, the researchers added a second dose. By that time, a large proportion of the participants in the U.K. were already 30 to 40 days past their first shot. This fact biases the analysis because the study collected little data on shorter dose intervals.
Like the Pfizer protocol (and every other study's), the revised Oxford protocol stipulated a target dose interval (4-6 weeks). (Note that they later drew conclusions about 8 to 12 weeks.) There is a statistical reason for controlling the dose interval: we don't want to add new sources of variation to the outcomes, and dose interval is considered a relatively minor factor.
Somehow, the AstraZeneca staff could not control the dose intervals, ending up with some participants getting their second dose in "fewer than 6 weeks" and others getting it in "more than 12 weeks." (They never disclosed the full range.) Thus, they accidentally obtained variation in dose intervals. While this variation allows them to conduct an analysis of dose intervals, it has the side effect of muddying the headline efficacy analysis.
From there, they conducted an analysis of the effect of dose intervals on efficacy. Wait a minute: the dose intervals were not randomly assigned to the treatment population. In a proper randomized controlled experiment, several dose intervals would be pre-selected, and each participant would be randomly assigned one of them. Since this is no longer a randomized controlled experiment, we cannot interpret the analysis as if the dose intervals were randomized.
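To make the contrast concrete, here is a minimal sketch in Python of what randomized assignment of dose intervals would look like. The candidate intervals and participant labels are made up for illustration; nothing here is taken from the actual trial protocols.

```python
import random

# A hypothetical sketch, not the actual trial protocol: in a proper
# randomized design, the candidate dose intervals are pre-selected up
# front and each participant is randomly assigned one, independent of
# enrollment date, site, or anything else about the participant.
random.seed(42)

INTERVALS_WEEKS = [4, 8, 12]                 # made-up candidate intervals
participants = [f"P{i:03d}" for i in range(10)]

assignment = {p: random.choice(INTERVALS_WEEKS) for p in participants}
for p, weeks in assignment.items():
    print(p, "->", weeks, "weeks between doses")
```

The key property of this design is that the interval a participant receives carries no information about who that participant is. In the Oxford trial, the interval was determined by logistics and enrollment timing, which is exactly the opposite.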
Because the dose intervals were not randomized, we cannot assume "all else is equal". The observed difference in efficacy may be caused by factors other than dose interval. It's not clear how they scheduled participants for their second shots, or why the later enrollees couldn't receive the second shot within the target window. Further, I am not convinced that you can compare vaccine and placebo groups with similar dose intervals without investigating selection bias.
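A toy simulation illustrates the danger. All numbers below are hypothetical, not taken from the trial: the true risk reduction is fixed at 70% in every stratum, but in one interval stratum the vaccinated participants happen to skew toward low-exposure people while the placebo recipients skew high-exposure. The naive efficacy estimate in that stratum comes out near 80%, even though nothing about the vaccine changed.

```python
import random

# A toy simulation; every number here is hypothetical. The true risk
# reduction is 70% in BOTH strata. The only difference is selection
# bias: in the imbalanced stratum, the vaccine arm has fewer
# high-exposure participants than the placebo arm.
random.seed(1)

TRUE_RISK_REDUCTION = 0.70
N = 200_000  # simulated participants per arm per stratum

def attack_rate(p_high_exposure, vaccinated):
    """Simulated infection rate for one arm of one stratum."""
    cases = 0
    for _ in range(N):
        base = 0.08 if random.random() < p_high_exposure else 0.02
        risk = base * (1 - TRUE_RISK_REDUCTION) if vaccinated else base
        cases += random.random() < risk
    return cases / N

def naive_efficacy(p_high_vaccine, p_high_placebo):
    """1 - (vaccine attack rate / placebo attack rate), uncorrected."""
    return 1 - attack_rate(p_high_vaccine, True) / attack_rate(p_high_placebo, False)

# Balanced stratum: both arms have the same exposure mix -> ~0.70.
print("balanced stratum:  ", round(naive_efficacy(0.5, 0.5), 2))
# Imbalanced stratum: same true effect, but the estimate inflates to ~0.80.
print("imbalanced stratum:", round(naive_efficacy(0.3, 0.6), 2))
```

The point is not that this particular imbalance occurred in the Oxford trial; it's that without randomization of the intervals, we have no guarantee that it didn't.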
The "story time" is to lull us in with a randomized controlled experiment and as we fall asleep, feed us less reliable conclusions that come from an embedded observational study.
[P.S. I have an ongoing series that digs into the details of the AstraZeneca-Oxford Phase 2/3 trial report. So far, the posts are: 1 on overall efficacy and asymptomatic cases, 2 on protocol alterations, 3 on researcher degrees of freedom.]