Here comes the promised second installment of my recent post on anti-doping, in which I argue that we should pay a lot more attention to false negatives. Here's the last paragraph:
For me, the difficult question in the statistics of anti-doping is whether the current system is too lenient to dopers. If the risk of getting caught is low, the deterrence value of drug testing is weak. In order to catch more dopers, we have to accept a higher chance (than 0.1%) of accusing the wrong athletes. That is the price to be paid.
Note that this post was contributed to the Statistics Forum (link), a new blog sponsored by the American Statistical Association and edited by Andrew Gelman, and is reprinted here. Click the link above or scroll down to read the full post.
In a prior post on my book blog, I suggested that anti-doping authorities are currently paying too much attention to the false positive problem, and they ought to face up to the false negative problem.
This thought was triggered by the following sentence in Ross Tucker's informative article (on the Science of Sport blog) about the biological passport, a new weapon in the fight against doping in sports (my italics):
the downside, of course, is that cyclists who are doping can still go undetected, but there is this compromise between "cavalier" testing with high risk of false positives and the desire to catch every doper.
Kudos to Tucker for even mentioning the possibility of false negatives, that is, doping athletes who escape detection despite extensive testing. Most media reports on doping do not acknowledge this issue at all: reporters often repeat claims by accused dopers that "they had tested negative 100 times before", but a negative test carries little information when the incidence of false negative errors is high.
Unfortunately, having surfaced the problem, Tucker failed to ask the difficult questions, opting instead for the common stance that the overriding concern is minimizing false positive errors. This attitude is evident in his description of the false negative issue as "the desire to catch every doper", or, put differently, the desire to achieve a zero false-negative rate.
In fact, the opposite is true of today's drug-testing regime: the desire is to achieve near-zero false positive errors, and the inevitable statistical consequence of that objective is to admit a large number of false negative errors. Elsewhere in the article, Tucker cited a study in which the researchers targeted a false positive rate of only 0.1% (arguing that 1% would be too high for comfort). The flip side, which Tucker also reported (without comment), is that the same test picked up only "5 out of 11 doping athletes", which means the false negative rate was over 50%!
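To spell out the arithmetic, here is a quick sketch using only the two figures reported in the article:

```python
# Figures reported in the study Tucker cited
false_positive_rate = 0.001   # the researchers targeted 0.1%, rejecting 1%
dopers_tested = 11            # doping athletes in the study
dopers_caught = 5             # of whom the test flagged only five

true_positive_rate  = dopers_caught / dopers_tested   # a.k.a. sensitivity
false_negative_rate = 1 - true_positive_rate

print(f"Target false positive rate: {false_positive_rate:.1%}")
print(f"True positive rate:         {true_positive_rate:.1%}")    # about 45.5%
print(f"False negative rate:        {false_negative_rate:.1%}")   # about 54.5%
```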
Thus, it is unfair to brand the people complaining about false negatives as hoping to "catch every doper"!
***
The implications of a high false-negative rate are twofold: (1) the majority of dopers escape undetected and unpunished, so those who test positive can consider themselves rather "unlucky"; and (2) the negative predictive value of anti-doping tests is low, which makes a mockery of accused dopers who point to long strings of prior negative results.
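To see why the negative predictive value suffers, here is a minimal Bayes-rule sketch. The 20% prevalence of doping is an assumed figure purely for illustration; the sensitivity and specificity come from the study cited above:

```python
# Minimal Bayes-rule sketch; the prevalence is hypothetical.
prevalence  = 0.20      # assumed share of athletes doping (illustration only)
sensitivity = 5 / 11    # P(positive | doper), from the study Tucker cited
specificity = 0.999     # implied by the 0.1% false positive target

p_neg_given_doper = 1 - sensitivity   # about 54.5%: dopers usually test negative
p_neg_given_clean = specificity

p_negative = (prevalence * p_neg_given_doper
              + (1 - prevalence) * p_neg_given_clean)
p_doper_given_negative = prevalence * p_neg_given_doper / p_negative

print(f"P(doper | negative test) = {p_doper_given_negative:.1%}")  # about 12%
# A negative result lowers the 20% prior only to about 12%: hardly exoneration.
```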
The anti-doping authorities today concern themselves with minimizing false positives, and turn their heads away from the false negative issue. As I explain in Chapter 4 of Numbers Rule Your World (link), there is little hope of reform from within: outside lab experiments, false negative errors are invisible because few athletes would voluntarily disgrace themselves after passing drug tests! (The few cases of admission occurred long after the athletes retired.)
***
What we just discussed are results from lab experiments; what happens in the real world is likely to be even worse. Any error estimate should be treated skeptically, as a best-case scenario.
A useful analogy is the testing of ballistic missile defense systems. In the early stages of development, the interceptors are asked to destroy known targets: objects of known number, shape, and trajectory, launched from known locations at known times.
Similarly, the error rates of anti-doping tests are established by testing athletes in a lab setting who are known to be doping, on a known schedule, using known compounds at known dosages and known times. The real-life problem of catching dopers is significantly tougher.
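A toy calculation suggests how large the gap could be. Every number below (a 48-hour detection window, weekly dosing) is invented for illustration; the point is only that a randomly timed, out-of-competition test often misses the detection window entirely:

```python
# Toy model (all numbers hypothetical): in the field, the tester does not
# know when the athlete last dosed, so many tests fall outside the window
# in which the compound is detectable at all.
lab_sensitivity = 5 / 11     # detection rate when sampling inside the window
window_hours    = 48         # assumed detection window after each dose
doses_per_month = 4          # assumed weekly dosing
hours_per_month = 30 * 24

# Chance that a randomly timed test lands inside some detection window
p_in_window = min(1.0, doses_per_month * window_hours / hours_per_month)
field_sensitivity = p_in_window * lab_sensitivity

print(f"P(test hits a detection window): {p_in_window:.1%}")        # ~26.7%
print(f"Implied field sensitivity:       {field_sensitivity:.1%}")  # ~12.1%
```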
***
For me, the difficult question in the statistics of anti-doping is whether the current system is too lenient to dopers. If the risk of getting caught is low, the deterrence value of drug testing is weak. In order to catch more dopers, we have to accept a higher chance (than 0.1%) of accusing the wrong athletes. That is the price to be paid.
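To put a number on that price, consider a toy signal-detection model: suppose some biomarker follows N(0, 1) in clean athletes and N(2, 1) in dopers (the two-sigma separation is an invented figure, not from any real assay). Loosening the false positive target from 0.1% to 1% buys a sizable jump in the proportion of dopers caught:

```python
from statistics import NormalDist

# Toy signal-detection sketch of the tradeoff described above.
clean, doped = NormalDist(0, 1), NormalDist(2, 1)

for target_fp in (0.001, 0.01):               # 0.1% vs. 1% false positive rate
    threshold = clean.inv_cdf(1 - target_fp)  # flag samples above this value
    tpr = 1 - doped.cdf(threshold)            # share of dopers caught
    print(f"FP rate {target_fp:.1%}: threshold {threshold:.2f}, "
          f"dopers caught {tpr:.1%}")
# Loosening the FP target from 0.1% to 1% roughly triples detections here;
# the exact numbers depend on the assumed separation, the direction does not.
```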
Where do you stand on this?
What's wrong with having two tests, one with a very low false negative rate and one with a very low false positive rate? The second only gets used when the first gives a positive result.
Posted by: Tom West | 04/05/2011 at 09:17 AM
Tom: Interesting idea, but if these two tests exist, then you could combine the two indicators into one metric and create a single test with greater overall accuracy (see the sketch below).
Posted by: Kaiser | 04/07/2011 at 12:13 AM
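For what it's worth, here is how the two-stage scheme plays out under some invented error rates, assuming the two tests err independently (a strong assumption, since two assays for the same compound likely share failure modes):

```python
# Sketch of the screen-then-confirm scheme with invented error rates.
sens_screen,  fp_screen  = 0.95, 0.10    # test 1: tuned for low false negatives
sens_confirm, fp_confirm = 0.50, 0.001   # test 2: tuned for low false positives

# An athlete is accused only if BOTH tests come back positive.
combined_fp   = fp_screen * fp_confirm       # 0.01%: false positives plummet
combined_sens = sens_screen * sens_confirm   # 47.5%: capped by the weaker test

print(f"Combined false positive rate: {combined_fp:.3%}")
print(f"Combined sensitivity:         {combined_sens:.1%}")
```

Under these made-up numbers, false positives plummet, but the combined sensitivity can never exceed that of the confirmatory test; pooling both indicators into one score, as Kaiser suggests, uses the same information without that cap.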
I agree with you about the problem. I do not agree with you at all about the solution.
We have to ask why there are so few positive anti-doping tests. Is it because doping practices are very rare, or because there is a very large number of false negatives?
In my opinion, we have to distinguish between the false positive and false negative rates of the anti-doping test and the false positive and false negative rates of the anti-doping procedures. While it is reasonable to assume that the false positive rates are nearly the same, the difference between the false negative rates could be very large. You say nearly the same thing: "Any error estimate should be treated skeptically, as a best-case scenario." I would go further: the real-world error is worse than one can imagine, because there are many ways to cheat the anti-doping procedures (managing the test calendar, swapping test tubes, taking confounding chemicals, corrupting or circumventing anti-doping officials).
Indeed, when accused athletes defend themselves by saying that "they had tested negative 100 times before", they have at least some basis. How is it possible that one doped athlete can pass 100 anti-doping tests? My answer is that the false negative rate is about 99% (and the true positive rate about 1%). I don't think I am being too pessimistic: if the true positive rate were 1%, 100 tests would give one positive result on average (see the sketch below).
So, increasing the false positive rate from 0.1% to 1% would be completely useless: we would have a test that comes up positive with probability 1% regardless of the presence or absence of doping. We might as well draw a numbered ball and decide based on that.
Hence, in my opinion, the real solution is to work on the anti-doping procedures to increase the true positive rate, though I suspect the political will to handle the matter is lacking.
Posted by: Antonio Rinaldi | 04/07/2011 at 06:08 AM
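Antonio's back-of-the-envelope reasoning is easy to check numerically; the sketch below assumes the 100 tests are independent, which is itself a simplification:

```python
# How plausible is "100 straight negative tests" for a doping athlete, as a
# function of the per-test true positive rate? (Assumes independent tests.)
for tpr in (0.455, 0.10, 0.01):   # lab estimate, then two lower guesses
    p_all_negative = (1 - tpr) ** 100
    print(f"TPR {tpr:.1%}: P(100 straight negatives) = {p_all_negative:.2e}")
```

At the lab-estimated 45% true positive rate, passing 100 tests would be essentially impossible (about 4e-27); at 1%, it happens more than a third of the time. If repeat offenders really do pass that many tests, the field sensitivity must be far closer to Antonio's pessimistic figure than to the lab's.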
Antonio: You and I are saying the same thing. Increasing the true positive rate is the same as reducing the false negative rate, which in turn means increasing the false positive rate (because of the tradeoff between FN and FP).
For the test cited in the article, the true positive rate is close to 50%; it's 1 minus the false negative rate. But this lab estimate is way too high. As I stated in the book, take any Olympics and you'll find that under 1% of the samples were declared positive. That percentage is the maximum proportion of athletes that could ever be caught. Sadly, most people I know believe the real proportion of dopers is much higher than that.
Posted by: Kaiser | 04/07/2011 at 08:07 PM
I couldn't agree more, Tom... combine the two tests.
Posted by: josh | 06/30/2017 at 06:08 AM