News just broke that the FDA may announce more "rigorous" standards for approving a coronavirus vaccine. This suggests a willingness to listen to some of the critics who see the process as rushed.
The specific rules mentioned in this Washington Post article are:
1) Participants must be followed for a median of at least two months, starting from when they receive the second dose.
2) The analysis must be conducted after at least five severe cases of Covid19 have been observed in the placebo arm.
3) The analysis must be conducted only after a minimum number of Covid19 cases in older people has been recorded.
***
Rules 2 and 3 are fine in spirit. It's important that the vaccine works against severe disease and in older people, and these rules ensure that every trial analysis includes the two groups of greatest concern to the medical community.
Recall that the case definition requires only mild symptoms, not severe ones. Since many more people develop mild symptoms, it's possible that the interim analysis contains only mild cases. It's also possible, because cases are self-reported, that the cases used in the interim analysis involve only younger people.
I suspect those scenarios are unlikely anyway. Because testing is triggered by self-reporting, severe cases are more likely to surface than mild cases. And so long as the trial has enough older participants, they should show up in the interim analysis, because older people tend to develop more severe illness.
Nevertheless, if the vaccine is approved by the end of 2020, it will be on the strength of an interim analysis. Such analyses are never designed to make statistically reliable statements about subgroups, such as older people or severe cases. This is another reason why the trial should run to completion even if the FDA grants interim approval.
***
Revised rule 1 is misleading and potentially worrying.
Let's talk about its good intention first. Rule 1 sets up a second criterion for triggering the interim analysis. Previously, the only criterion was exceeding a prespecified total number of cases. That creates a potentially undesirable scenario: if enrollment is supercharged (either by ramping up marketing or by relaxing admission standards), or if the true case rate is markedly higher than assumed, the first interim analysis could be triggered when most participants have barely received the second dose.
By setting observation time as a second criterion, the new rule provides more confidence that the protection afforded by an approved vaccine lasts at least 2 months.
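To make the two triggers concrete, here is a minimal sketch in Python. The case threshold, the 60-day cutoff, and the function name are placeholders of mine, not values from any actual protocol.

```python
from statistics import median

# Hypothetical thresholds, for illustration only (not real protocol values).
CASE_THRESHOLD = 53            # assumed total number of confirmed cases
MIN_MEDIAN_FOLLOWUP_DAYS = 60  # the "median of at least two months" rule

def interim_analysis_triggered(total_cases, followup_days_since_dose2):
    """followup_days_since_dose2: one follow-up count per participant."""
    enough_cases = total_cases >= CASE_THRESHOLD
    enough_followup = median(followup_days_since_dose2) >= MIN_MEDIAN_FOLLOWUP_DAYS
    # Under the old rule, only enough_cases mattered; the new rule adds the
    # follow-up condition as well.
    return enough_cases and enough_followup
```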
As I explained in this previous post, Moderna does not count cases until two weeks after the second dose, so tracking people for two months really yields only 1.5 months of observation time. That is no lesser worry. But what really perturbs me is the use of the word "median".
The median is the middle person, so the statement actually allows up to half of the participants to be observed for fewer than two months. Further, **it is highly unusual, and in fact statistically fallacious, to perform an analysis in which different participants are observed for different amounts of time**. I hope this is a communications faux pas, a misunderstanding on the part of the journalist. The sensible rule is to require every participant (not the median participant) to be observed for at least two months (preferably longer) after the second dose.
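A quick numerical illustration of that loophole, using simulated follow-up times (nothing here comes from an actual trial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated days of follow-up since the second dose for 30,000 participants,
# assuming staggered enrollment (purely hypothetical numbers).
followup = rng.uniform(1, 120, size=30_000)

print(f"median follow-up: {np.median(followup):.0f} days")            # about two months
print(f"share observed under 60 days: {np.mean(followup < 60):.0%}")  # roughly half
```

The median requirement is satisfied even though roughly half the participants have been observed for less than two months.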
P.S. [10/6/2020: A group of scientists and medical experts agree! STAT News writes about their pushback on rule #1. They want "a minimum of two months' observation for all participants".]
I am somewhat surprised by the bolded statement in the last paragraph. An analysis with varying followup is only fallacious if the timing is ignored in the analysis. An entire subfield, survival analysis, was developed to properly analyze such data. Indeed, the protocol specifies Cox proportional hazards regression as its primary analysis method, which does incorporate the followup information. We can argue whether the proportional hazards assumption is reasonable, especially in the light of a second dose, but that does not seem to be your point.
Your concern about the 2-month median followup restriction makes sense from a safety standpoint, but I don't think it is a direct threat to the validity of the survival analysis.
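For concreteness, here is a minimal sketch of what I mean, with made-up numbers and assuming the lifelines package (not whatever software the sponsors actually use). The Cox fit uses each participant's own observed duration and event indicator, so varying followup enters the likelihood rather than being ignored.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical trial: each participant has their own administrative cutoff
# (days of follow-up available at the interim analysis).
arm = rng.integers(0, 2, size=n)                   # 0 = placebo, 1 = vaccine
cutoff = rng.uniform(1, 90, size=n)                # staggered enrollment
time_to_case = rng.exponential(np.where(arm == 1, 6000.0, 1500.0))

df = pd.DataFrame({
    "vaccine": arm,
    "duration": np.minimum(time_to_case, cutoff),  # observed time
    "event": (time_to_case <= cutoff).astype(int), # 1 = confirmed case
})

# The partial likelihood uses each participant only for as long as they were
# actually under observation, so the varying followup is modeled, not ignored.
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratio for "vaccine" ~ 0.25 under these made-up rates
```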
Posted by: Aniko Szabo | 09/29/2020 at 09:27 AM
AS: Great comment. I agree that I should qualify the bolded statement by adding "without making assumptions and adjustments". Now that you opened this can of worms, I do wonder about the survival analysis.
Here's my understanding of how one would do a survival analysis when the population has variable observation frames: on the day of analysis, every participant's data are frozen; the median observation time is computed, say two months; the maximum observation frame is, say, three months; and anyone who has been observed for fewer than three months and has neither been infected nor dropped out is regarded as censored (due to the interim analysis).
Without doing a full simulation (though a rough sketch follows the list below), I'd guess that this means:
a) by design, we introduced a type of censoring determined by enrollment time (or observation time). The earlier a participant enrolls, the lower the chance of censoring.
b) almost everyone in this analysis is censored at some time between one day and three months minus one day.
c) this censoring is treated as if the participant had dropped out, although unlike a drop-out, it is forced by the analysis design (and most of it "disappears" if we wait until the full analysis).
d) the uncertainty band widens dramatically as observation time increases, since we have the full interim sample at day one and almost no one left by month three.
e) at the median observation time (two months), only half of the interim sample still contributes to the estimate of the hazard function, and the resulting band is probably too wide to be useful.
f) basic survival analysis isn't magic; by assuming that the censoring is independent of the outcome, the data accumulated during the shortened observation frames can be combined with the rest of the data. But the people who have been observed for three weeks do not improve our estimate of the hazard function beyond three weeks, and if the goal is to establish that the protection lasts longer than two months, I think we have a problem.
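Here's the kind of rough sketch I have in mind, with made-up numbers. I'm using a Kaplan-Meier fit from the lifelines package just to show how the risk set thins out under this forced censoring; the real primary analysis would be the Cox model you mentioned.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 15_000  # hypothetical size of one arm at the interim cut

# Staggered enrollment: observation frames range from about a day to 3 months.
frame = rng.uniform(1, 90, size=n)
time_to_case = rng.exponential(scale=2000.0, size=n)  # made-up infection times

# Freeze the data at the interim cut: anyone not infected within their own
# observation frame is administratively censored at the end of that frame.
event = time_to_case <= frame
duration = np.minimum(time_to_case, frame)

kmf = KaplanMeierFitter().fit(duration, event_observed=event)

# Point (d): the risk set shrinks toward zero as we approach three months,
# so the confidence band around the survival curve balloons at the far end.
print(kmf.event_table.tail())
print(kmf.confidence_interval_.tail())
```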
To summarize, I'm concerned because the forced censoring coexists with the reduced sample size of the interim analysis and a timeline compressed by the early readout. And I'd like the vaccine to provide at least two or even three months of protection.
Let me know if there are other adjustments I'm missing.
Posted by: Kaiser | 09/29/2020 at 06:16 PM
For reference, here is the letter from experts about rule #1. They raise the same concern I laid out here. https://www.statnews.com/pharmalot/2020/10/06/fda-covid19-coronavirus-pandemic-vaccines-trump/
Posted by: Kaiser | 10/06/2020 at 06:05 PM