Crisis opens the door to wishful thinking. On this door may hang the tag "Operation Warp Speed," the code name for the US government's effort to fast-track a vaccine for the novel coronavirus.
News arrived last week that Moderna is to begin the Phase 3 human trial of its candidate vaccine. I was curious how this process is being streamlined, so I pulled up the document that lays out the design of this clinical trial. (link)
A vaccine trial presents many unique challenges. We cannot ethically inject someone with the coronavirus, especially those who are randomly assigned to the placebo. Instead, we have to round up people who live or work in high-risk environments, such as hospital staff or residents of communities with a lot of virus. (This presents a different ethical issue, as poorer communities with a high density of minorities have been hit hard by Covid-19.)
Half the trial participants are given the vaccine shot while the other half receive a placebo. It's crucial that no one knows who has gotten the vaccine, and that the assignment to vaccine or placebo is determined by a coin flip. After administering the shots (really two shots a month apart, as Moderna learned that one shot was not enough), the scientists must wait to see what proportion of each treatment arm eventually gets infected with the novel coronavirus.
***
The general design is simple enough but the operational challenges are substantial. I'll focus on three issues.
Measuring the outcome
How do we know if and when someone gets infected? Like any clinical trial, this one requires follow-up. Testing a drug such as remdesivir on patients is much easier since the participants are staying in hospitals. In a vaccine trial, you rely on the participants roaming around the high-risk communities. Each participant must be monitored until s/he gets infected (or until the end of the trial, whichever occurs first). How frequently are the participants tested for infection? What test is used? How are test results reported to the trial managers?
There's more. The two doses of the vaccine are administered one month apart. How will they deal with people who test positive after the first dose but before receiving the second dose? (In my view, these people should be counted as infected.)
Timing the analysis
When will the data be analysed? This is extremely tricky, requiring a lot of guesswork. How long does it take for the vaccine to do its job? We don't even have a solid handle on how many contacts people have each day with infected people, how likely infection is upon contact with an infectious person, how long it takes for an infection to be detected, and so on. We also need a firm grasp of how much virus there is in these high-risk neighborhoods. One element of any clinical trial design is a specific analysis plan. For example, in Gilead's remdesivir trial, the scientists analyzed the mortality rate at day 15, meaning each patient was tracked for 15 days after receiving treatment. There may be multiple "endpoints"; another analysis was planned after 30 days for remdesivir.
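To see why these unknowns matter, here is a minimal back-of-the-envelope sketch of how the assumed infection rate drives the timing. The monthly attack rate and arm size below are my own illustrative assumptions, not numbers from the trial design.

```python
# Back-of-the-envelope: how quickly do infections accumulate in the placebo arm?
# The attack rate and arm size are illustrative assumptions, not trial figures.

monthly_attack_rate = 0.01   # assumed chance an unvaccinated participant is infected in a month
placebo_arm_size = 15000     # assumed number of participants in the placebo arm

for month in range(1, 7):
    # expected cumulative infections, assuming a constant monthly attack rate
    expected = placebo_arm_size * (1 - (1 - monthly_attack_rate) ** month)
    print(f"Month {month}: about {expected:.0f} expected infections in the placebo arm")
```

Halve the attack rate and it takes roughly twice as long to accumulate the same number of infections, which is why a firm grasp of local incidence is needed before the analysis dates can be fixed.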
Controlling for behavioral changes
Someone who would have skipped a birthday party had they not been in the vaccine trial might, after receiving the shots, decide to risk it. Of course, people will be told not to adjust their everyday lives; for a trial lasting many months, that's a big ask. To measure such behavioral change due to trial participation, a third group that receives no injection at all can be included in the clinical trial.
***
Those were three questions I was hoping to find answers to from the clinical trial design documentation. So, what did I learn?
Measuring the outcome
Surprisingly, there is nothing on the follow-up plan in the design document. We don't know what diagnostic test will be used, who will be testing the participants, how frequently they are tested, etc. This is a worrying omission. Also omitted is how they intend to analyze people who test positive between the two doses.
Timing the analysis
The analysis plan is also largely absent. One hint is that the key metric of the infection rate has an observation window described as "Time Frame: Day 29 (second dose) up to Day 759 (2 years after second dose)". This implies that each trial participant is monitored for up to two years.
The phrase "up to" will prove pivotal. It means Moderna will not declare failure until two years have passed but they may announce mission accomplished much earlier. In fact, the leader of Operation Warp Speed predicted on CNN last night that they could have a working vaccine with over 90% effectiveness by the end of the year.
Such a statement almost defies belief. Enrollment in the trial starts, for practical purposes, in August. Someone who joins the trial on August 1 gets the second dose on August 29; if the scientists monitor that person for 60 days, an outcome is available by November 1. But no trial can enroll the entire cohort in one day. Assume it takes 30 days to find enough participants. Those who start the trial on August 31 will be ready for analysis on November 30. Even if the data analysts work quickly, it will be around Christmas when the first analysis can be reported.
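The date arithmetic can be laid out explicitly. The 30-day enrollment window and the 60-day monitoring period are the same assumptions used in the paragraph above.

```python
from datetime import date, timedelta

# Timeline for the last participants to enroll, under the assumptions above
enrollment_end = date(2020, 8, 31)                    # assumes enrollment takes about 30 days
second_dose = enrollment_end + timedelta(days=28)     # two doses given one month apart
earliest_outcome = second_dose + timedelta(days=60)   # assumes 60 days of monitoring after dose 2

print(second_dose)       # 2020-09-28
print(earliest_outcome)  # 2020-11-27, i.e. the end of November before analysis can even begin
```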
If a vaccine has to be approved and commercially available by year's end, then they are banking on the vaccine showing a benefit within one month of the second dose. Is that reasonable, or is it wishful thinking?
Controlling for behavioral changes
Nothing in the design document addresses this issue. It's highly unlikely they will set up a third test group, as that would use up some of the trial participants and lengthen the time needed to report results.
***
Not specifying the time points of the analysis is a deeply worrying omission in the trial design. As I explained above, this might reflect a lack of confidence in our understanding of the virus. However, such an omission opens the door to a statistical fallacy known as testing to significance.
The following analysis procedure is the statistician's worst nightmare: every 30 days for up to two years, the Moderna analyst computes the infection rates for the vaccine and the placebo, and tests the difference for statistical significance; if the difference is not significant, the analyst waits another month and repeats the process; if the difference is significant, the analyst declares victory, ending the trial after approval from the review board.
Such a procedure sounds reasonable, but it is the source of a lot of spurious statistical findings.
I'm preparing another post to explain this statistical fallacy. What you need to know is that if one waits around long enough, anything can happen. In the context of a clinical trial, if one checks for a statistically significant difference between the vaccine and placebo arms every 30 days for years, at some point, such a difference will appear - even if the vaccine is useless.
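A quick simulation makes the point. Both arms below have the same infection rate, so the vaccine is useless by construction, yet checking for significance every month for up to two years produces a "significant" difference far more often than the nominal 5 percent. The infection rate, arm sizes, and number of looks are illustrative assumptions of mine, not the trial's parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_arm = 15000      # assumed participants per arm
monthly_rate = 0.01    # assumed monthly infection probability, identical in both arms
n_looks = 24           # test every 30 days for up to two years
n_trials = 2000        # number of simulated trials

early_wins = 0
for _ in range(n_trials):
    vaccine_cases = placebo_cases = 0
    for look in range(n_looks):
        # new infections this month (ignores depletion of susceptibles, for simplicity)
        vaccine_cases += rng.binomial(n_per_arm, monthly_rate)
        placebo_cases += rng.binomial(n_per_arm, monthly_rate)
        # two-proportion test at this interim look
        table = [[vaccine_cases, n_per_arm - vaccine_cases],
                 [placebo_cases, n_per_arm - placebo_cases]]
        p_value = stats.chi2_contingency(table)[1]
        if p_value < 0.05:
            early_wins += 1   # "victory" declared even though the vaccine does nothing
            break

print(f"A significant difference appeared in {early_wins / n_trials:.0%} of simulated trials")
```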
In order to prevent such spurious results, statisticians recommend nailing down an analysis plan prior to running an experiment. This is precisely why clinical trials are "pre-registered", and why Moderna is required to submit the trial design before the clinical trial starts. The analysis plan should not be influenced by peeking at the interim results. That possibility is present in the Moderna vaccine trial since the design document is missing some crucial details.
"these people should be counted as infected"
... in a separate category from those who were infected after two doses. More generally, it would be great if the data recorded dates of the two inoculations and date of infection (if any). (... And were made available for independent analysis.)
Posted by: Paul | 08/01/2020 at 09:24 AM