


Tom West

What's wrong with having two tests, one with a very low false negative, and one with a very low false positive? The second only gets used when the first gives a positive result.


Tom: Interesting idea, but if these two tests exist, then you could combine the two indicators into one metric and create a single test with greater overall accuracy.
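To see why the two-stage scheme is really just one combined test, here is a minimal sketch with made-up error rates (and the simplifying assumption that the two tests err independently): a sensitive screening test followed by a specific confirmatory test, where a sample counts as positive only if both tests flag it.

```python
# Sketch of the two-stage idea, with hypothetical rates for illustration.
# Test A: screening test with a low false negative rate (high sensitivity).
# Test B: confirmatory test with a low false positive rate (high specificity),
# run only when Test A comes back positive.

tpr_a, fpr_a = 0.99, 0.10   # assumed rates for the screening test
tpr_b, fpr_b = 0.60, 0.001  # assumed rates for the confirmatory test

# Declared positive only if BOTH tests flag the sample; assuming the
# tests' errors are independent, the rates multiply.
combined_tpr = tpr_a * tpr_b   # sensitivity of the two-stage procedure
combined_fpr = fpr_a * fpr_b   # false positive rate of the procedure

print(f"Combined TPR: {combined_tpr:.4f}")   # 0.5940
print(f"Combined FPR: {combined_fpr:.6f}")   # 0.000100
```

Note that the combined procedure drives the false positive rate down at the cost of a lower true positive rate: the FN/FP tradeoff applies to the pair just as it does to any single test.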

Antonio Rinaldi

I agree with you about the problem. I do not agree with you at all about the solution.
We have to ask why there are so few positive antidoping tests. Is it because doping practices are very rare, or because there is a very large number of false negatives?
In my opinion, we have to distinguish between the false positive and false negative rates of the antidoping test itself, and those of the antidoping procedures around it. While it is reasonable to assume the false positive rates are nearly the same, the difference between the false negative rates could be very large. You say nearly the same thing: "What we just discussed are results from lab experiments; what is happening in the real world is likely to be even worse. Any error estimate should be treated skeptically as the best-case scenario." I go beyond that: the error estimate is worse than one can imagine, because there are many ways to cheat the antidoping procedures (gaming the test calendar, swapping test tubes, taking confounding chemicals, corrupting or circumventing antidoping officials).
Indeed, when accused athletes defend themselves by saying that "they had tested negative 100 times before", they have at least some reason. How is it possible that one doped athlete can pass 100 antidoping tests? My answer is that the false negative rate is about 99% (and the true positive rate is about 1%). I am convinced I am not being too pessimistic: if the true positive rate were 1%, 100 tests would give one positive result on average.
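A quick check of this back-of-the-envelope argument, under the simplifying assumption that the 100 tests are independent: with a true positive rate of only 1% per test, a doped athlete passing all 100 tests is entirely plausible.

```python
# With an assumed 1% true positive rate per test, how likely is a
# doped athlete to pass 100 independent tests in a row?
tpr = 0.01
p_pass_all_100 = (1 - tpr) ** 100   # P(all 100 tests negative | doped)
expected_positives = 100 * tpr      # expected positive results in 100 tests

print(f"P(100 straight negatives | doped): {p_pass_all_100:.3f}")  # 0.366
print(f"Expected positives in 100 tests: {expected_positives:.1f}")  # 1.0
```

So even a career-long clean testing record is weak evidence of innocence if the per-test detection rate is that low.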
So, increasing the false positive rate from 0.1% to 1% would be completely useless: we would have a test that comes back positive with probability 1% irrespective of the presence or absence of doping. We may as well draw a numbered ball and decide based on it.
Hence, in my opinion, the real solution is to work on the antidoping procedures to increase the true positive rate, even though I think the political will to handle the matter is lacking.


Antonio: You and I are saying the same thing. Increasing the true positive rate is the same as reducing the false negative rate, which in turn means accepting a higher false positive rate (because of the tradeoff between FN and FP).

For the test cited in the article, the true positive rate is close to 50%; it's 1 minus the false negative rate. But this lab estimate is way too high. As I stated in the book, take any Olympics and you'll find that < 1% of the samples were declared positive. That number is the maximum number of athletes who could ever be caught. Sadly, most people I know believe that the real proportion of dopers is much higher than that.
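The "maximum number caught" bound can be made concrete with a small calculation. The sample count and prevalence below are assumptions for illustration only; the point is that the observed positive rate caps the real-world detection rate, whatever the lab-estimated 50% TPR says.

```python
# Illustrating the bound with assumed numbers: if under 1% of samples
# test positive, at most 1% of athletes can be caught, regardless of
# the true prevalence of doping.
n_samples = 5000           # hypothetical number of Olympic samples
positive_rate = 0.01       # observed: fewer than 1% declared positive
assumed_prevalence = 0.20  # suppose 20% of athletes actually dope

max_caught = n_samples * positive_rate    # at most 50 athletes caught
dopers = n_samples * assumed_prevalence   # 1000 actual dopers
implied_tpr_ceiling = max_caught / dopers # real-world TPR is capped here

print(f"Implied real-world TPR ceiling: {implied_tpr_ceiling:.1%}")  # 5.0%
```

Under these assumptions the real-world true positive rate cannot exceed 5%, an order of magnitude below the lab figure.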


I couldn't agree more, Tom... combine the two tests.


