When I turn on the TV and listen to the media hyperventilating about Covid, it's no different from turning on CNBC and listening to Jim Cramer and company lecture us on why the stock market went up or down yesterday. If cases went down, it's because of the vaccine. If cases went up, it's the variants. We can write down all the usual suspects: vaccine, lockdowns, variants, holidays, travel, people behaving badly, masks, schools, treatments, parties, dining outdoors, dining indoors, etc. Depending on what happened yesterday, the host picks one from the box. Medical science reporting has descended to the level of stock price watching.
So, I was immediately attracted to Andrew Gelman's recent account of science as "a harbor clotted with sunken vessels," in which he lamented the current state of scientific publications that elevates headline-grabbers, failing to self-correct when such studies subsequently prove to be bogus.
The problem is much broader, and it's about to hit medical research in a huge way, once the pandemic is in the rear-view mirror. The emergency situation has given cover to an ocean of pre-prints and warp-sped refereeing, in which headline-grabbers have floated to the top, without a clear path for future researchers to navigate through this sea of debris. Many - if not most - of these studies will eventually be debunked.
***
In his post, Gelman cited the example of research on the effect of the neuropeptide oxytocin (OT) on trust between humans. A seminal paper published in 2005 claimed to have found an association between the two. The original experiment failed to replicate, and by now, it should be clear that the association has not been proven. And yet, Prof. Gelman found that (a) the citation count of the original paper is still several times that of the replication studies; and (b) the original paper continues to accumulate new citations. He did only a casual Google search, so we don't know whether those who cited the original research also cited the refutations - I doubt it.
If science advances by cleansing bad ideas as better ones emerge, this situation does not put it in a good light. There is a host of reasons for it. One is confirmation bias - researchers look for anything that supports an assumption of theirs so they can move on with the rest of their study. Another is professional misconduct, or the socialized behavior of a subculture.
For me, the problem is also rooted in the practice of scientific publication. It's too damn hard to find those follow-up studies! The following is a diagram of a scientific paper which links to past papers through the device of "references".
By definition, a reference is a link to the past. So a replication study, or for that matter any kind of follow-on work, will not be found in the original paper's references. In this system, it's a chore to discover future papers that reference a given paper. Modern search engines attempt to solve this problem by the device of "citations". From the perspective of the cited paper, a citation is a link to a future paper that lists it in the references.
Unfortunately, the citation device is extremely noisy. Most of the papers that cite the original paper do not evaluate its merits. Citations are tracked as a means of measuring the impact of a given paper - and this has led to gaming. Citations are given to appease referees, to elevate colleagues, or for any number of other reasons.
Buried deep inside a long list of citations may be a few replication or follow-up studies. These are future studies that directly address the validity of the original paper.
Adding to the mess, follow-up studies are expected to appear long after the original publication. Citations that are not follow-up studies are both more numerous and quicker to arrive.
Researchers may then rely on meta-analyses or survey studies. If they exist, these studies are easy to locate. But they are published far, far, far into the future relative to the original study. They also require the existence of a prior body of follow-up or replication studies.
The end result is that follow-up studies take a lot of effort to find.
Nevertheless, where there is a will, there is a way. Follow-up studies can be marked or registered. Then search engines could automatically attach them to the original paper, just like citations, but under a separate tag.
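The tagging idea can be sketched as a toy registry. Everything here is hypothetical - the class, the tags, and the paper IDs are made up for illustration, not any real search engine's data model:

```python
from collections import defaultdict

class ForwardLinkRegistry:
    """Hypothetical forward-link registry: each entry tags a later
    paper's relationship to an earlier one, so follow-up studies can
    be surfaced separately from ordinary citations."""

    def __init__(self):
        # original paper id -> list of (later paper id, tag)
        self._links = defaultdict(list)

    def register(self, original, later, tag):
        # tag is e.g. "citation", "replication", "meta-analysis"
        self._links[original].append((later, tag))

    def follow_ups(self, original):
        # Return only the later papers that directly test the original,
        # filtering out the noisy bulk of ordinary citations.
        return [later for later, tag in self._links[original]
                if tag in ("replication", "meta-analysis")]

registry = ForwardLinkRegistry()
registry.register("paper_2005", "cite_2008", "citation")
registry.register("paper_2005", "replication_2012", "replication")
print(registry.follow_ups("paper_2005"))  # prints ['replication_2012']
```

The point of the separate tag is that the reader asking "did this hold up?" gets the two relevant papers directly, instead of wading through hundreds of undifferentiated citations.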
Why should we care? Because in science, truth matters.
***
This timing problem affects other fields as well. I published a letter to the editor once about a New York Times Magazine article. But if you read that original article, you're not going to find my letter. There are no links to the future, only to the past.