This is Part 2 of my notes about the Moderna vaccine trial. Read Part 1 here.
***
Durability of Protection
As described in the last post, the full analysis will not take place until the middle of 2021, so the vaccine developers are hoping that their vaccine will show enough efficacy during the first or second interim analysis to accelerate the timeline. Early reads increase the chance of false-positive findings, as I outlined in a previous post. To counter this, the study drastically lowers the p-value threshold required to reach statistical significance in the interim analyses. (Instead of p = 0.05, the first interim analysis in effect sets p = 0.0002.) This is the proper thing to do.
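To see how much a naive early peek would inflate the false-positive rate, and how a stricter interim threshold restores it, here is a minimal Monte Carlo sketch. The single-look setup and the thresholds are illustrative assumptions, not the trial's actual alpha-spending plan; it relies only on the fact that, under the null, a z-statistic computed at half the data and the final z-statistic have correlation sqrt(1/2).

```python
# Minimal sketch (illustrative numbers, not the trial's actual plan):
# one interim look at half the data, then a final look at all of it.
# Under the null, the interim and final z-statistics have correlation
# sqrt(1/2), so we can simulate them directly.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
z_interim = rng.standard_normal(n)
z_final = np.sqrt(0.5) * z_interim + np.sqrt(0.5) * rng.standard_normal(n)

# Naive plan: two-sided p = 0.05 (|z| > 1.96) at both looks.
naive = (np.abs(z_interim) > 1.96) | (np.abs(z_final) > 1.96)
# Strict plan: interim at p = 0.0002 (|z| > 3.72), final at p = 0.05.
strict = (np.abs(z_interim) > 3.72) | (np.abs(z_final) > 1.96)

print(f"naive overall false-positive rate:  {naive.mean():.3f}")   # ~0.08
print(f"strict overall false-positive rate: {strict.mean():.3f}")  # ~0.05
```

The naive plan rejects a true null about 8 percent of the time rather than the promised 5 percent; dropping the interim threshold to 0.0002 brings the overall rate back to roughly 5 percent.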
However, if the first interim analysis results in a vaccine approval, then participants will have been followed for at most 2.5 months after the second dose. We will not know whether the vaccine offers protection beyond 2.5 months.
According to the protocol, Moderna "intends" to continue the study if it receives interim approval. There will be an ethical debate about whether participants who received the placebo should be given the vaccine should it win approval. Some may oppose withholding a "proven" treatment from these participants. But if they are given the vaccine, we lose the ability to measure protection beyond 2.5 months.
Sample Size
If the first interim analysis is successful, then the vaccine would receive approval based on about 50 total infections. If the vaccine's effectiveness is 50%, we expect half as many infections in the vaccine arm as in the placebo arm, so the vaccine arm should account for one-third of the total, or about 17 infections. That's an awfully small number.
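Here is the arithmetic in a short sketch. Assuming equal-size arms, a confirmed case lands in the vaccine arm with probability (1 - VE) / (2 - VE), where VE is the vaccine's effectiveness; the 50% figure and the 50-case total come from the discussion above, while the other VE values are hypothetical.

```python
# Expected split of confirmed cases between arms, assuming equal-size arms.
# A case lands in the vaccine arm with probability (1 - VE) / (2 - VE).
def vaccine_arm_share(ve: float) -> float:
    """Fraction of all confirmed cases expected in the vaccine arm."""
    return (1 - ve) / (2 - ve)

total_cases = 50
for ve in (0.5, 0.6, 0.7):  # 50% matches the text; the others are hypothetical
    share = vaccine_arm_share(ve)
    print(f"VE = {ve:.0%}: vaccine-arm share = {share:.2f}, "
          f"~{share * total_cases:.0f} of {total_cases} cases")
# VE = 50% gives a one-third share, i.e. ~17 of 50 cases.
```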
Nevertheless, if approved, the vaccine would have met the more stringent statistical significance threshold, so the small sample size is not a fatal problem. But the required number of cases at the first interim analysis is small enough to make many statisticians nervous. Continued tracking beyond the first analysis will be key to making sure the early result is not a mirage.
Defining Trial Outcomes
In the primary analysis, the "endpoint" (outcome) is infection, which requires PCR confirmation plus a pre-specified level of symptoms.
Some experts argue that only severe cases should be counted, since we don't really need to stop the mild ones. The chosen endpoint includes both mild and severe cases, but because most cases are expected to be mild, there is concern that the signal (severe) to noise (mild) ratio is low. Of course, if only severe cases were counted, the trial would last longer.
The opposite criticism is that the trial by and large misses asymptomatic cases because almost all testing is triggered by symptom reporting. One can argue that we need to stop asymptomatic spread because any viral transmission may eventually produce severe cases.
Moderna is taking the middle ground here.
Unusual Sources of Noise
In running any experiment, we'd love to eliminate as many sources of noise (variability) as possible so that any observed differences can be attributed to the vaccine. Several practical issues in this vaccine trial may produce noise, complicating our interpretation of the results.
The FDA permits accommodations due to the pandemic. This means that in-person visits may be replaced by phone calls. That in itself is not a problem if all visits are converted to phone calls. The problem is that swaps occur on request, so the trial ends up with a mix of in-person visits and phone calls. This creates the possibility that the method of follow-up is associated with, or even causes, different outcomes.
In addition, the protocol lists a variety of tests used to confirm infection. The nasal swab is preferred, but saliva and respiratory samples are also used. These tests have different accuracies and characteristics, so the mix of test types may affect the count of confirmed cases. For example, people living in high-risk areas are more likely to submit saliva samples by mail rather than get nasal swabs at in-person visits.
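To make the concern concrete, here is a toy calculation. The sensitivities and test mixes below are made-up numbers for illustration only; the point is that the same number of true infections can yield different confirmed counts purely because of the test mix.

```python
# Toy illustration: how a mix of test types can distort confirmed-case counts.
# All sensitivities and mixes are made-up numbers, not the trial's actuals.
true_cases = 100
sensitivity = {"nasal swab": 0.95, "saliva": 0.85}  # assumed detection rates

mixes = {
    "mostly in-person (nasal swab)": {"nasal swab": 0.9, "saliva": 0.1},
    "mostly mail-in (saliva)":       {"nasal swab": 0.3, "saliva": 0.7},
}

for group, mix in mixes.items():
    detected = sum(true_cases * share * sensitivity[test]
                   for test, share in mix.items())
    print(f"{group}: ~{detected:.0f} of {true_cases} true cases confirmed")
# Same true infection count, different confirmed counts, driven only by the test mix.
```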
Uncertainty Around the Case Rate
The design assumes a 0.75% baseline case rate within six months, but the researchers acknowledge uncertainty around this number, which is why they end enrollment based on the number of cases rather than the number of participants. What if this assumption is wrong?
In particular, this base rate seems fairly low. Those who hope for "herd immunity" might be shocked to learn that the virus is expected to infect less than 1 percent of a community over six months. At that sluggish rate, when will we reach 70 percent or so prevalence?
All sample sizing requires guessing at the base rate, and sometimes we get it wrong. If the base rate is somewhat higher than 0.75%, then the trial will accumulate infections faster than expected, and thus end enrollment earlier than expected. The real case rate among the participants could be, say, 2%.
Here's the rub. If 2 percent had been used as the base rate in the design of the clinical trial, statisticians would have stipulated more than 53 infections to trigger the first interim analysis. This means that if the observed case rate is higher than expected, the first interim analysis suffers from insufficient data: a smaller-than-expected sample makes it more challenging to beat the significance threshold.
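To see how sensitive the trial's timing is to the base rate, here is a back-of-envelope sketch. It assumes roughly 30,000 participants, a constant infection rate, everyone enrolled on day one, and no vaccine effect on case accrual; all of these are simplifications for illustration, not the trial's actual model.

```python
# Back-of-envelope: how fast ~53 total cases accrue under different base rates.
# Assumes ~30,000 participants all enrolled on day one, a constant infection
# rate, and (for simplicity) no vaccine effect on accrual. Illustrative only.
target_cases = 53
participants = 30_000

for six_month_rate in (0.0075, 0.02):  # design assumption vs. a higher reality
    cases_per_month = participants * six_month_rate / 6
    months_to_target = target_cases / cases_per_month
    print(f"6-month rate {six_month_rate:.2%}: ~{cases_per_month:.0f} cases/month, "
          f"~{months_to_target:.1f} months to {target_cases} cases")
# A higher-than-assumed base rate hits the case-count trigger much sooner,
# freezing the first analysis at a smaller-than-intended body of evidence.
```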
***
The Moderna protocol is reasonable: there are some unconventional elements, but in an emergency, we have to accept some accommodations. It's still hard to imagine how any of these trials will get to a strong result by the end of October or November.
News about the vaccine trials is coming fast. Just this weekend, the Washington Post reported that the FDA may tighten the vaccine approval requirements a little to pacify critics and build trust. I have another post up discussing these modifications.