You have read my arguments against one-dose vaccine schedules. What is the argument on the other side?

The only technical justification I can find for the U.K.'s decision to deviate from the vaccine dosage schedule that was tested in the clinical trials is this document issued by the Joint Committee on Vaccination and Immunisation, citing an analysis by Public Health England (link, Appendix A).

The entire argument hinges on the claim that "short term vaccine efficacy from the first dose of Pfizer-BioNTech vaccine is calculated at around 90%".

Wait - if you've been reading my prior posts, you'd ask: wasn't the vaccine efficacy around 50% right before the second shot? I've been showing this chart, which is derived from the cumulative-cases chart published by Pfizer:

The British experts acknowledged this much: "Published efficacy between dose 1 and 2 of the Pfizer-BioNTech vaccine was 52.4% (95% CI 29.5 to 68.4%)."

So, the question is: how did they work this magic? How did the 50% move up to 90%? Notice that this grade inflation pushed the value well above the upper bound of the 95% confidence interval.

***

The short answer is they cherry-picked the data. They re-computed the efficacy by counting only cases that occurred between Days 15 and 21. This pushes up the VE because most of the cases in the vaccinated arm occurred in the early part of the curve.
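A minimal sketch of the mechanics, using made-up daily case counts (not the trial's actual data): if most of the vaccinated arm's cases fall before Day 15 while the placebo arm's rate stays flat, restricting the counting window to Days 15-21 mechanically inflates the computed VE.

```python
# Hypothetical daily case counts, Days 1-21 (illustration only, not trial data).
# Most vaccinated-arm cases fall before Day 15; the placebo rate stays flat.
vaccine_cases = [5] * 14 + [1] * 7
placebo_cases = [5] * 14 + [5] * 7

def ve(vax, pbo, start_day, end_day):
    """VE over Days start_day..end_day (inclusive, 1-indexed),
    assuming equal-sized arms so denominators cancel."""
    v = sum(vax[start_day - 1:end_day])
    p = sum(pbo[start_day - 1:end_day])
    return 1 - v / p

print(round(ve(vaccine_cases, placebo_cases, 1, 21), 2))   # full window -> 0.27
print(round(ve(vaccine_cases, placebo_cases, 15, 21), 2))  # Days 15-21 only -> 0.8
```

The underlying data are identical in both calls; only the counting window changes.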

Deviating from the results of the vaccine trial allows assumptions and gut feelings to modify the science. I am not a data purist, and I believe there are situations in which incorporating assumptions and gut feelings improves the analysis. I want you to decide for yourself whether this re-calculation is justified, so I'll provide some notes on the issues you should be thinking about.

A key claim is that any vaccine takes time to generate an immune response and so it's "unfair" to start counting from the beginning. Note that this modification literally turns positive cases into negatives because the number of participants in the trial (the denominator) is not changed.

Because vaccine efficacy is computed from comparing the case rate of the vaccinated arm with that of the placebo arm, the trial design has a built-in control for exactly the problem being raised. If the vaccine were completely useless until day 10, then we'd have counted about the same number of cases up to day 10 on either arm.
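For concreteness, with 1:1 randomization the VE calculation reduces to one minus the ratio of case counts in the two arms, since the equal denominators cancel. A quick check with the published Pfizer Phase 3 totals (8 cases in the vaccine arm vs. 162 in the placebo arm) recovers the headline number:

```python
def vaccine_efficacy(cases_vaccine, cases_placebo):
    """VE = 1 - relative risk; with equal-sized arms the denominators cancel."""
    return 1 - cases_vaccine / cases_placebo

# Published Pfizer Phase 3 totals: 8 cases (vaccine) vs 162 (placebo)
print(round(vaccine_efficacy(8, 162) * 100, 1))  # -> 95.1
```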

How do they know whether it takes 5, 7, 10, or 15 days for the first dose to take effect? How did they draw the line at Day 15?

From the document: "Figure 3 [the cumulative cases chart] clearly shows that from approximately 10 days after the first dose the cumulative incidence in the vaccine and Placebo groups diverge. It would therefore be appropriate to calculate the VE of a single dose in a period after this 10 days."

Day 10 slipped to Day 15 in the next sentence: "A reasonable interval to use for post first dose VE would therefore be from >14 days to the time of the second dose..."

Also worth considering is the tiny sample size underlying an analysis at Day 21. Recall that the trial was stopped when a prespecified number of cases was reached. That minimum was not reached by Day 21. This leads to hilariously big error bars. The error bar around that 90% number is 52% to 97%. In plain language, the data were consistent with any VE between those two bounds. Think about that. (By contrast, the proper analysis, which used all of the data over a median two-month observation period and placed VE at 95%, yielded an error bar of 90% to 98%.)
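To see why few events mean huge error bars, here is a sketch using a standard normal approximation on the log rate ratio. The full-trial counts (8 vs 162) are the published totals; the counts for the 7-day window (2 vs 21) are my own guesses for illustration, since the document does not report them, chosen only to show how the interval balloons when events are scarce.

```python
import math

def ve_with_ci(cases_vaccine, cases_placebo, z=1.96):
    """VE with an approximate 95% CI from a normal approximation on the
    log rate ratio (assumes equal person-time in both arms)."""
    rr = cases_vaccine / cases_placebo
    se = math.sqrt(1 / cases_vaccine + 1 / cases_placebo)
    low = 1 - rr * math.exp(z * se)    # widest plausible RR -> lowest VE bound
    high = 1 - rr * math.exp(-z * se)
    return 1 - rr, low, high

# Full trial (8 vs 162 cases): a narrow interval
print([round(100 * x) for x in ve_with_ci(8, 162)])  # -> [95, 90, 98]
# Hypothetical 7-day window with very few events (2 vs 21 cases):
print([round(100 * x) for x in ve_with_ci(2, 21)])   # -> [90, 59, 98]
```

With hundreds of events the interval is about 8 points wide; with a couple of dozen it spans roughly 40 points, in the same ballpark as the 52%-97% interval quoted above.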

***

As a reminder, the entire argument is based on the supposition that with just one dose, the "short-term efficacy" is 90%. What does "short-term" mean?

That 90% number is computed from what happened between Days 15 and 21 in the trial. Thus, "short-term" means seven days. Beyond Day 21, everyone in the trial received a second shot, so it is no longer possible to separate the effects of the first and second doses.

Accepting that the first shot really reduces the case rate by 90%, and that the two shots together reduce the case rate by 95%, we know for sure that the 5% difference between those two values is not statistically significant. (The error bar for the 90% estimate is roughly 50% to 100%.)

In other words, if the above were true, then the second dose would not generate a statistically significant increase in efficacy above that afforded by the first dose. In other, other words, the second dose is useless.

Believing the U.K.'s calculation of first-dose efficacy is tantamount to believing that the second dose is useless. If the second dose is useless, no one, including me, could object to giving twice as many people the first shot. The question is whether that strategy works if the first shot is 50% effective - which is the "published efficacy".

***

Finally, what is the efficacy of the first dose after day 21?

The U.K. experts now make this argument: they claim that the second shot has no effect during its first 7 days, so one can treat Days 21-28 as if the second shot had not been administered. Thus, the period of protection of the first dose is not just 7 days (Days 15-21) but 14 days (Days 15-28).

How should we think about the effect of the second dose? In a simplistic additive model, we assume the two doses have independent effects which can be summed to get the overall effect. Most vaccine researchers believe that the second "booster" shot reinforces the work of the first shot. In this case, a model with an interaction effect is more appropriate. This means that the combined effect of two shots can be larger than the sum of the individual effects.
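The difference between the two models can be sketched with toy numbers (made up for illustration; the units are arbitrary "effect" units, not VE percentages). An additive model simply sums the per-dose effects; an interaction term lets the booster amplify the first dose beyond that sum.

```python
# Toy sketch (made-up effect sizes) contrasting an additive model with one
# that includes an interaction (booster) term.
def combined_effect(dose1, dose2, interaction=0.0):
    """Combined effect of two doses; interaction defaults to 0 (additive)."""
    return dose1 + dose2 + interaction

print(round(combined_effect(0.5, 0.2), 2))        # additive -> 0.7
print(round(combined_effect(0.5, 0.2, 0.25), 2))  # with interaction -> 0.95
```

Under the additive model, observing the first dose's effect tells you most of what the combination will do; with an interaction term, it does not, which is exactly why extrapolating from first-dose data alone is risky.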

The interaction effect may be even more complex. The effect of the first dose may be waning, while the second dose reverses the decline in immune response.

In summary, it appears that two further assumptions have been made implicitly. First, that the effects of the two doses are independent and additive. Second, that beyond Day 28, the VE will hold steady. This implies that the second dose has no material effect for about 12 weeks (which is claimed as the maximum recommended delay between doses).

P.S. [1/27/2021] It seems that after I wrote this post, a second appendix (B) was added to the UK Government page. It explains how they took the vaccine efficacies and projected averted deaths. On a quick glance, I don't see any details of the mathematics. In particular, I'd like to know whether this model incorporates lockdowns and personal measures, and how vaccinated people will behave. As indicated above, if the first dose of Pfizer is assumed to be 90 percent efficacious, which is statistically indistinguishable from 95%, I can't imagine any model would prefer two doses.

I assume the choice of 2 doses was based on antibody response. According to https://www.theguardian.com/world/2021/jan/19/single-covid-vaccine-dose-in-israel-less-effective-than-we-hoped, the second dose achieves a 6 to 12 times increase in antibodies. I guess that is because the immune system is already primed to create antibodies. The question that I raised in another comment is what happens if the doses are spaced 2 months or more apart. Does anyone know?

I agree that any result based on a week's data would have so few events that it would produce useless results.

Posted by: Ken | 01/29/2021 at 12:49 AM

Ken: I'm not liking the switch to looking at antibody response. For one, if that is the right measure, then all of the trials should be based on that outcome, but none of them are. Secondly, how many vaccinated people did they test for antibody response? I can't imagine this is a large number. And I agree with you that, a priori, all three major vaccine developers tested two doses in Phase 3. In fact, AstraZeneca changed their testing protocol midway through the trial to go from one to two doses!

Posted by: Kaiser | 01/29/2021 at 02:14 AM

Kaiser: I agree that antibody response is a poor surrogate, especially as they don't seem to have any data linking vaccine efficacy to antibody response. Presumably at some stage AstraZeneca realised that their vaccine in single-dose form wasn't good enough. It then didn't help that they had a manufacturing problem and initially produced half-strength doses. This seems to have been a massive oversight, as AstraZeneca should have been testing the manufacturing contractor's product.

Posted by: Ken | 01/31/2021 at 12:29 AM