With a bit of journalistic magic, I spoke French last week and answered a few questions about vaccine studies. The original piece is here.
***
Here are my answers in English:
What happened to the "real-world studies" arguing that vaccines are almost 100% effective?
To speak plainly, the real-world studies have overstated vaccine effectiveness by a lot. After the pandemic, a post-mortem will reveal many methodological problems that are easy to spot, but in the current climate, there is no room - neither in the scientific community nor among the public - for such a discussion to take place.
Real-world data are messy, and their analyses require a prudent blend of scientific principles and subjective judgment. To give you an idea, a recently popular technique is called the "test-negative case-control design". Under ideal conditions, it should give a reasonable estimate of vaccine effectiveness. The entire edifice relies on tracking people who have come forward to get tested for Covid-19. In the real world, though, some people have a series of test results, which may include positives as well as negatives, and each investigator decides on the rules for which test to count. Moreover, test results are not 100% accurate. We also don't have evidence that the vaccine is equally effective for every subset of people.
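To make the arithmetic concrete, here is a minimal sketch of how a test-negative design turns test results into an effectiveness number. The function name and every count below are made up purely for illustration; they are not from any study, and real analyses involve all the judgment calls described above about which test results land in which cell.

```python
# Illustrative sketch of a test-negative case-control calculation.
# All counts are hypothetical; the design estimates vaccine effectiveness (VE)
# as 1 minus the odds ratio of vaccination among test-positives vs. test-negatives.

def test_negative_ve(vax_pos, unvax_pos, vax_neg, unvax_neg):
    """VE = 1 - odds ratio, computed from the four cells of the 2x2 table."""
    odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
    return 1 - odds_ratio

# Hypothetical counts among people who came forward for testing:
# 50 vaccinated and 500 unvaccinated tested positive;
# 1,000 vaccinated and 1,000 unvaccinated tested negative.
print(test_negative_ve(50, 500, 1000, 1000))  # -> 0.9, i.e. "90% effective"
```

Notice that the headline number depends entirely on how each person's string of test results is reduced to a single entry in one of those four cells.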
In the past few months, the methods used in real-world studies have evolved. The first studies deployed well-established, complex statistical methods - such as matching and regression adjustment - that correct for some of the obvious biases that arise because vaccinated people differ from unvaccinated people in many ways, such as age and ethnicity. More recent studies have largely abandoned those careful approaches, and instead pushed naive, entirely inappropriate analyses that compare the unadjusted case rates of vaccinated and unvaccinated populations.
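For comparison, here is roughly what that naive calculation amounts to, again with invented numbers. The point is what the formula leaves out: any adjustment for age, behaviour, testing intensity, or anything else that differs between the two groups.

```python
# Illustrative sketch of the naive, unadjusted comparison described above.
# All numbers are hypothetical.

def naive_ve(cases_vax, pop_vax, cases_unvax, pop_unvax):
    """VE = 1 - (raw case-rate ratio), with no adjustment for how the groups differ."""
    rate_ratio = (cases_vax / pop_vax) / (cases_unvax / pop_unvax)
    return 1 - rate_ratio

# 200 cases among 1,000,000 vaccinated; 2,000 cases among 1,000,000 unvaccinated.
print(naive_ve(200, 1_000_000, 2_000, 1_000_000))  # -> 0.9
```

The formula is fine for a randomized trial, where the two groups are comparable by design; applied to observational data, it silently attributes every difference between the groups to the vaccine.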
As I explained in a recent blog post, this naive methodology commits many errors, such as counting infected people who have taken two mRNA doses as "unvaccinated" cases because they were infected before 14 days after the second dose. This case-counting window, typically set to start 14 days after the second dose, creates a lot of analytical problems. Using this window, analysts remove cases from the vaccinated group, but they cannot symmetrically apply the same rule to unvaccinated people, who did not get any injections, so the day of the second shot has no meaning for them. This is a key difference between the clinical trials and the real-world studies. In the trials, unvaccinated people receive placebo shots, and so we know when they reach 14 days after the second shot.
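A made-up numerical example shows how this asymmetric case counting can conjure effectiveness out of thin air. Assume, purely for illustration, two equal-sized groups, a vaccine that does nothing at all, and 30 early post-second-dose infections that get shifted out of the vaccinated column and into the unvaccinated one:

```python
# Hypothetical illustration of the case-counting-window problem.
# Suppose the vaccine has zero effect and cases occur at the same
# background rate in two equal-sized groups.

background_cases = 100   # true cases per group over the study period (made up)
early_cases = 30         # vaccinated people infected before the 14-day mark (made up)

# Naive window accounting: early cases are removed from the "vaccinated" column
# and, in the worst version, counted as "unvaccinated" cases instead.
vaccinated_counted = background_cases - early_cases     # 70
unvaccinated_counted = background_cases + early_cases   # 130

apparent_ve = 1 - vaccinated_counted / unvaccinated_counted
print(round(apparent_ve, 2))  # -> 0.46, i.e. 46% "effectiveness" from a useless vaccine
```

The true effectiveness in this toy scenario is zero; the misclassification alone manufactures the appearance of protection.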
For this and various other reasons, the real-world data must be corrected to get a good estimate of effectiveness. The more recent studies have failed because the analysts have abandoned any pretense of dealing with real-world complications. The methods they use are designed for analyzing clinical trial data, which do not have the messiness of real-world data.
These studies have been widely reported in the media and by policy makers. Were we too hasty? Was there a temptation to exaggerate the effectiveness of vaccines "for a good cause"?
The analyses and reports were too hasty, but that is understandable given the evolving emergency. In my book Numbersense, I worried about a Dark Age of science arriving on the back of "big data". With the widespread availability of datasets and access to tools, it has become too easy to run analyses and cram them into science journals. Hundreds of new research papers appear each month on medRxiv. Neither journalists nor citizens like me have enough time to read every study carefully and judge its merits. These papers contain incomplete data and simplified descriptions of the methodologies, making them very hard to follow. Many of my blog posts about specific research papers require weeks of preparation.
The reporting in the media has gotten sloppy. How many times have you seen the vaccine trials referred to as "double-blind"? If you pull out any of the study protocols (Pfizer, Moderna, AstraZeneca, etc.) and just read the first page, you'll learn that the vast majority of them were "single-blind". Much of the coverage also ignores the type of research methodology. I've seen reports in which a result from a clinical trial with 40,000 participants is mentioned in the same paragraph as another result from a lab experiment with fewer than 100 samples. We can't even get the basic facts right, let alone the results, which demand interpretation.
This state of affairs is unlikely to change because a self-reinforcing cycle has emerged. Media outlets love stories about how "perfect" the mRNA vaccines are. Researchers who deliver confirmatory evidence get the headlines and the glory.
Public relations professionals seem to believe that the only way to convince 80 percent of the world to get vaccinated is to paint a black-and-white picture, a strategy in which the ends justify the means. The "truths" they claimed in recent months included 90% protection against symptomatic cases, even higher protection against hospitalizations, and 100% protection against transmission and spread. As I documented in another recent blog post, each of these claims has been dented by reality. Unfortunately, this has not resulted in a more nuanced perspective. They simply retreat and replace the debunked claims with new black-and-white truths. I don't want to second-guess the PR industry, as they are the experts in their field.
By trying to do too much (even if it means putting science on the back burner), don't we risk fuelling mistrust of vaccines?