The current debate about whether the Pfizer and Moderna vaccine trial results can be used to justify a single-dose treatment reminds me of something I've encountered frequently in the business world. This different context might help some of you grasp the key issues in the debate.
Assume we are running a food box subscription company, e.g., Blue Apron. We want to retain as many subscribers as possible. When a subscription comes up for renewal, our marketing team sends a sequence of two emails to entice the customer to renew. The success metric is a lower number of deactivations within 24 days of the renewal date. The first email goes out on the renewal date, and the second email is delivered one week later. As the following response curve shows, about half of the eventual cancellations happen within three days of the first email, and deactivations slow to a crawl soon after the second email is delivered.
We ran a "holdout" experiment (a randomized controlled test), with some percentage of customers withheld from the email campaign completely. As shown below, if the customers do not receive emails (red line), then the total number of deactivations is higher.
The data scientists who analyzed this experiment concluded that the two-touch email campaign reduced deactivations from 20% to 12% by Day 24. They also offered the following chart of relative efficiency, which is based on the ratio of the deactivation rates:
The emails suppressed the deactivation rate by 40% by the 24th day after the renewal date. The effect accumulated over time.
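The ratio behind that chart takes only a few lines to write down. A minimal sketch (the 20% and 12% figures come from the experiment described above; the framing as code is mine):

```python
# Cumulative deactivation rates by Day 24, as reported by the data scientists.
control_rate = 0.20   # holdout group: received no emails
treated_rate = 0.12   # received the two-touch email campaign

# Relative efficiency: the proportional reduction in deactivations,
# analogous to vaccine efficacy (1 - treated rate / control rate).
relative_efficiency = 1 - treated_rate / control_rate
print(f"{relative_efficiency:.0%}")  # prints 40%
```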
The data scientists' opinion was immediately buried by boisterous chatter from both sides.
***
On one side is the finance team. Pointing to the deactivation curve in the first chart, the CFO pushed to eliminate the second email. The first email did all the heavy lifting, and we would save about half the campaign cost by dropping the second email. [By the way, this is analogous to skipping the second dose of the Pfizer or Moderna vaccine.]
On the other side is the marketing team. The CMO alleged that the data scientists had undervalued the email campaign. People don't always read emails the day they are sent, so it's unfair to judge the emails from that day. Since it takes five days for people to take action, the test should be re-evaluated starting from Day 6.
As the chart shows, by starting the analysis on Day 6 rather than Day 0, the relative efficiency is now 50%, which is 10 percentage points higher than what the data science team reported. [This is analogous to measuring vaccine efficacy from 7-14 days after the second dose, rather than from the day after the first dose.]
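The CMO's recalculation can be reproduced with hypothetical numbers consistent with the story (the 20 and 12 totals are from the experiment; the before-Day-6 splits are my assumptions, chosen so that roughly half of the holdout's attrition falls before Day 6 and the Day-6 metric lands at 50%):

```python
# Deactivations per 100 renewal-date subscribers. Totals (20 and 12)
# are from the experiment; the before-Day-6 splits are assumed.
control_total, control_by_day5 = 20.0, 10.0
treated_total, treated_by_day5 = 12.0, 7.0

# Data science team's metric: count all deactivations from Day 0.
re_full = 1 - treated_total / control_total

# CMO's metric: count only deactivations occurring from Day 6 onward.
re_day6 = 1 - (treated_total - treated_by_day5) / (control_total - control_by_day5)

print(f"from Day 0: {re_full:.0%}, from Day 6: {re_day6:.0%}")
# prints: from Day 0: 40%, from Day 6: 50%
```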
***
Rebuttal time.
The marketers object vehemently to the finance team's proposal to trim the campaign to one email. By design, the first email sets the stage, and the second email closes the sale. If the second email were dropped, we would lose all the benefits beyond Day 5.
Both sides have in fact speculated on how the deactivation curve would look if the second email were cut. The marketers attribute everything beyond Day 7 to the second email, while the finance team attributes nothing to it.
When the scientists were finally able to speak up, they confessed that they don't have any data to support one side or the other. Most likely, the second email has a partial impact, neither full nor zero, which means the true curve for a single email lies somewhere between the two blue lines on the above chart. The scientists suggested looking at intermediate metrics, such as how many times and when users clicked on or otherwise interacted with each email, which offer circumstantial evidence on whether subscription decisions beyond Day 7 are partly driven by the second email. [This is the debate about one or two doses of the vaccine.]
It's the financial analysts' turn to play opposition, and they dislike leaving the first five days out of the evaluation. It's misleading to claim that the emails reduced the deactivation rate by 50%. The reduction is 50% only if applied to the subscribers still active on Day 6. But about half of the customer attrition happens before Day 6. An improvement in this new metric cannot, by definition, touch any of the deactivations prior to Day 6. [In the same way, the frequently reported 94% vaccine efficacy ignores any case that arises between the two shots, and only counts cases starting 7-14 days after the second shot. We cannot measure the impact of the vaccine by applying 94% to the number of vaccinated people.]
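The analysts' objection becomes concrete with the same hypothetical numbers (totals from the experiment; the before-Day-6 split is my assumption):

```python
# Per 100 subscribers: the holdout loses 20, the treated group 12
# (figures from the experiment); assume 10 of the holdout's losses,
# i.e. half the attrition, occur before Day 6.
control_total, control_before_day6 = 20.0, 10.0
treated_total = 12.0

# What the campaign actually averts per 100 subscribers:
averted = control_total - treated_total            # 8.0

# Naively applying the 50% "Day 6" efficiency to all 20 control
# deactivations overstates the impact...
naive_averted = 0.50 * control_total               # 10.0

# ...because that 50% can only act on the deactivations after Day 5:
reachable = control_total - control_before_day6
print(averted, naive_averted, 0.50 * reachable)    # prints: 8.0 10.0 5.0
```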
In the example I concocted, most cancellations occur within the first seven days, so a metric that starts counting from Day 6 ensures that our evaluation misses the bulk of our target audience and focuses on a minority.
***
The CEO calls a time-out. S/he asks the data scientists why the experiment did not contain a third test cell that sends customers just the first email and not the second. The scientists defer to the business teams, who explain that the third cell was dropped from the test design for expediency. Adding a third cell would have extended the test by another month, and we simply could not afford the wait. They took this shortcut because, in the last metric review meeting, the CEO was pounding the table demanding warp-speed actions to avert the decline in subscriber numbers.
P.S. I have simplified the test design for these marketing experiments. A so-called "drip" campaign typically contains multiple touch points across multiple channels, which means each experiment involves many test cells. The challenge of these designs is to arrange the test cells so that we can get a clean read of whether a specific touch point on a specific channel affects the performance metric.
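The combinatorics behind that challenge can be sketched quickly. Under an invented design where the first touch point is email-or-nothing and the second can go out by email, by SMS, or not at all (all of this is illustrative, not the design from the story), a full factorial already needs six cells:

```python
from itertools import product

# Hypothetical drip-campaign design: each touch point is either
# withheld or sent on one of its channels. A full factorial crosses
# every option; comparing two cells that differ only in one touch
# point gives a clean read on that touch point.
touch_1 = ["none", "email"]
touch_2 = ["none", "email", "sms"]

cells = list(product(touch_1, touch_2))
print(len(cells), "test cells")  # prints: 6 test cells
```

Each added touch point or channel multiplies the cell count, which is exactly why clean reads get expensive so quickly.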
[1/22/2021: Early signs coming from Israel, where the vaccination campaign has moved the fastest, appear to support my opposition to single doses. The Guardian has a write-up about their disappointment. Now, because these analyses are performed on observational data rather than data from a randomized experiment, there is always the possibility that the number of doses is not the reason for the continued rise in cases.]