While responding to PHC's comment on my previous post, I found some data that allows me to make the statistical case behind Dr. Ablin's argument for sharply curtailing the use of PSA screening.
Prostate cancer affects 16% of American men (according to Ablin). If every male is screened, 84% of the screening pool is healthy. Two-thirds of these healthy men will nonetheless be told they have cancer; in other words, 56% (two-thirds of 84%) of the screening pool are healthy people with positive PSA results. Eighty percent of the sick will also test positive, and that is almost 13% of the screening population. Adding those two, we expect 69% of the screening population to be told they have cancer, yet only 13% in fact have it. If you are one of those testing positive, it is very hard to figure out whether you have cancer or not.
The number that trips up this test is the 56% of healthy people who test positive. It can be sharply reduced if the screening test is limited to high-risk males. Say, of the high-risk patients being screened, 50% have cancer; then the 56% becomes 33% (two-thirds of 50%). Still not that good, but much better.
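The arithmetic above can be checked in a few lines of Python. The rates come from the post (16% prevalence, 80% sensitivity, a two-thirds false-positive rate among the healthy); the function name is mine, just for illustration:

```python
def screening_breakdown(prevalence, sensitivity, false_pos_rate):
    """Return the fractions of the screening pool that are
    false positives (healthy, told 'positive') and
    true positives (sick, told 'positive')."""
    healthy = 1 - prevalence
    false_pos = false_pos_rate * healthy
    true_pos = sensitivity * prevalence
    return false_pos, true_pos

# Screen every male: 16% prevalence
fp, tp = screening_breakdown(0.16, 0.80, 2/3)
print(round(fp, 2), round(tp, 3), round(fp + tp, 2))  # 0.56 0.128 0.69

# Restrict to a high-risk pool where 50% have cancer
fp_hr, _ = screening_breakdown(0.50, 0.80, 2/3)
print(round(fp_hr, 2))  # 0.33
```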
***
Note that I am not saying screening tests are useless; they are useful if we use them properly. The paradox is that the more people we put into the screening pool, the worse the test performs. The reason is that adding more healthy people creates more false positives, which devalues a positive result because most of those who test positive will not have cancer.
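This dilution effect can be sketched with Bayes' rule: as the pool's prevalence drops, so does the probability that a positive result means cancer. A minimal sketch, assuming the post's 80% sensitivity and two-thirds false-positive rate (the function name is mine):

```python
def ppv(prevalence, sensitivity=0.80, false_pos_rate=2/3):
    """Probability of cancer given a positive test (Bayes' rule)."""
    tp = sensitivity * prevalence
    fp = false_pos_rate * (1 - prevalence)
    return tp / (tp + fp)

# Diluting the pool with healthy people drags the value of a positive down
for p in (0.50, 0.16, 0.05):
    print(f"prevalence {p:.0%}: P(cancer | positive) = {ppv(p):.1%}")
```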
A test that gives two-thirds false positives is not a very accurate test, and is not very useful at all. I cannot believe they would tell these 69% of people they have cancer. Are you sure these numbers are correct?
This particular test doesn't help your argument much, because it's worthless no matter what the a priori probabilities are. There are plenty of examples of screenings that produce more false positives than true positives even when the underlying test is accurate.
Posted by: Cris Luengo | 03/12/2010 at 09:45 AM
Cris, note that the 69% is a worst-case scenario, as I'm assuming that every male gets screened; but there is no way around the flood of false positives because the specificity is so low.
Posted by: Kaiser | 03/12/2010 at 09:51 AM
You also have to figure in what you expect people who get positives to do. In this particular case, there are a number of men with prostate cancer for whom the treatment is no treatment, because the cancer is so slow-moving and the men are so old.
So if you take these men out of the pool, then what happens?
Posted by: John | 03/12/2010 at 11:30 AM
I was really surprised by that very low specificity. It would be interesting to see if that's the norm for all screening tests or just the sad story of the PSA. I also like John's comment about what happens after the screening results are revealed.
But again, do doctors rely simply on the screening test to make a diagnosis? I personally wouldn't give much credence to a screening result until I had passed a more official (though potentially more expensive and/or invasive) diagnostic test.
On a lighter note, I find it hilarious that PSA stands for Prostate-Specific Antigen, while the specificity of the test is really very poor.
Posted by: Pierre-Hugues Carmichael | 03/12/2010 at 12:17 PM
"56% healthy people who test positive"
If you flip a coin, you'll only have 50% false positives.
Posted by: Jon Peltier | 03/12/2010 at 12:31 PM
John and PHC: Anyone testing positive in a screening test should go for a confirmatory test, as you suggested. Whatever that test is, it also has its own false positive rate. In addition, there must be a reason why that second test is not used as a screening test: as PHC pointed out, it's probably because it is very expensive, or very invasive.
Jon: I said the same thing in my reply to PHC. A specificity of 33% means 2/3 of the time, a healthy person will test positive. If you flip a coin, only 1/2 would test positive. But this is not quite right because in reality, we don't know who's healthy and who's not.
Posted by: Kaiser | 03/13/2010 at 12:38 AM
Why not continue with an application of Bayes rule?
Pr( c | + ) = Pr( + | c ) Pr( c ) / [ Pr( + | c ) Pr( c ) + Pr( + | ~c ) Pr( ~c ) ]
            = (0.8)(0.16) / [(0.8)(0.16) + (0.67)(0.84)] ≈ 0.185
Which is to say, the probability of having cancer given that you get a positive test result is about 1/5.
Posted by: noahpoah | 03/13/2010 at 08:35 PM
Noahpoah: Thanks for computing the positive predictive value (PPV).
It shows that the PSA test is mildly useful, i.e. better than random. Without the test, 16% of the population have cancer ("the prior"). Given a positive test result, the chance of having cancer rises to 18.5%.
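Your calculation can be reproduced directly; a quick sketch in Python, using your rounded 0.67 for the false-positive rate:

```python
# Bayes' rule: P(cancer | positive test)
sens, prior, fpr = 0.80, 0.16, 0.67
ppv = sens * prior / (sens * prior + fpr * (1 - prior))
print(round(ppv, 3))  # 0.185
```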
Posted by: Kaiser | 03/14/2010 at 01:19 AM
Kaiser,
I hadn't thought of it that way. I was thinking of how the test's poor specificity and the relative rarity of the disease make a positive result fairly useless, maybe even damaging, given the stress of thinking you have cancer, the cost and possible complications of further, possibly more invasive, tests, etc.
Posted by: noahpoah | 03/14/2010 at 11:16 AM
Noahpoah: No doubt your conclusion is the right one, and I agree with it. I brought up the other interpretation to show that whoever adopted this test wasn't completely nuts. The test is better than random selection, just not much better.
Posted by: Kaiser | 03/14/2010 at 12:25 PM