In recent months, various professional organizations have called for reducing the use of routine medical screening tests (PSA screening for prostate cancer; mammogram for breast cancer; etc.). These guidelines are informed by statistical analyses.
But a lot of people find the idea of less screening counter-intuitive, even unpalatable. For example, Skip Lockwood, President of the Project to End Prostate Cancer, lamented:
The whole concept that you would do anything to reduce the amount of information you have does not make sense to me.
There is indeed something of a paradox. What's wrong with screening everybody? I address this issue in Chapter 4 ("Timid Testers / Magic Lassos") of Numbers Rule Your World.
Here's the short version:
Diagnostic tests are not foolproof: some proportion of those who test positive will in fact not have the ailment, while the test will fail to detect some of those who do. False positives lead to over-diagnosis, unnecessary procedures, and potential harm from side effects. False negatives give people a false sense of security; and if something like cancer is detected too late, it may have become too advanced to treat.
By definition, most people who take a screening test are healthy. Dr. Ablin tells us, for instance, that only 3% of American men die of prostate cancer. If 97% are not at risk, even a tiny false positive rate, multiplied across millions of test-takers, will result in a boatload of false positives.
But no test can distinguish a false positive from a true positive (if we knew who the true positives were, we wouldn't need a test, would we?). So these tests produce a boatload of positive results, only a small portion of which are true positives. This is why false positives are a huge problem.
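To make the arithmetic concrete, here is a back-of-the-envelope sketch. The 3% at-risk figure comes from the paragraph above; the 80% sensitivity and 10% false positive rate are made-up numbers for illustration, not measured properties of any particular test.

```python
# Back-of-the-envelope screening arithmetic.
# The 3% prevalence is the figure quoted above; the sensitivity and
# false positive rate are illustrative assumptions only.
population = 1_000_000       # hypothetical number of men screened
prevalence = 0.03            # the 3% at-risk figure quoted above
sensitivity = 0.80           # assumed: P(test positive | has disease)
false_positive_rate = 0.10   # assumed: P(test positive | healthy)

sick = population * prevalence
healthy = population - sick

true_positives = sick * sensitivity
false_positives = healthy * false_positive_rate

# Positive predictive value: the share of positive results that are genuine
ppv = true_positives / (true_positives + false_positives)

print(f"True positives:  {true_positives:,.0f}")          # 24,000
print(f"False positives: {false_positives:,.0f}")         # 97,000
print(f"Share of positives that are genuine: {ppv:.0%}")  # ~20%
```

With these made-up rates, false positives outnumber true positives roughly four to one, even though the test catches 80% of actual cases.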
Since a high proportion of those who test positive do not have the disease, what the screening test gives us is unreliable information. And having unreliable information can be worse than having no information.
When a screening test is first rolled out, it is usually recommended for higher-risk populations. If the screening criteria are later broadened, the newly eligible patients carry a lower risk of disease than those who met the original criteria. In other words, healthy patients are disproportionately added to the screening pool, and a good proportion of them will erroneously receive positive results. This is why targeted screening works better than broad-based screening.
Because prostate cancer is even rarer in young men than in older men, if young men are routinely screened, almost all the positive results will be false positives. This is why statisticians want such tests to be more targeted, less broad-based.
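The same back-of-the-envelope arithmetic shows why targeting matters. Holding the (assumed) accuracy of the test fixed, the share of positive results that are genuine falls along with the prevalence of disease in the screened group; the rates below are again illustrative assumptions, not measured properties of the PSA test.

```python
# How the share of genuine positives depends on disease prevalence,
# holding the test fixed. Sensitivity and false positive rate are
# illustrative assumptions.
sensitivity = 0.80
false_positive_rate = 0.10

def share_of_true_positives(prevalence):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# A higher-risk group, the general population, and a very low-risk group
for prev in [0.10, 0.03, 0.001]:
    print(f"prevalence {prev:>5.1%}: "
          f"{share_of_true_positives(prev):.0%} of positives are genuine")
```

At the lowest prevalence, which might describe a very low-risk group such as young men, virtually every positive result is a false alarm.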
This is a complex issue and this is as short as I can make it. For a fuller discussion, read Chapter 4.
I also highly recommend Dr. Ablin's article in the NYT: he focuses on another shoe waiting to drop for PSA tests, namely evidence that PSA screening has no health benefits at all, and that PSA levels are not highly correlated with the presence of prostate cancer.
About $3 billion is spent on PSA screening tests annually in the States.
References:
1. "Education should accompany prostate screening, new guidelines say", Thomas H. Maugh II, Los Angeles Times, Mar 4 2010.
2. "The Great Prostate Mistake", Richard J. Ablin, New York Times, Mar 10 2010.
As I understand it, positive screening tests are usually followed by more rigorous diagnostic tests, that is to say tests that are more sensitive than screening tests, right? Am I also right in assuming that screening tests are designed to have as high a specificity as possible?
I agree that screening tests should only be administered to "sufficiently at risk" individuals, that is to say that there is probably no need to screen young men for prostate cancer. However, I am left wondering if the problem of false positives is purely economic (that is to say, these false positives cost more to society because they have to undergo the more expensive diagnostic tests). Is past information taken into account when analysing the results of a screening test? Say your first three screenings are negative and your fourth is positive; does that increase the chances that this last result is a false positive? I would love to read your thoughts on the question.
Posted by: Pierre-Hugues Carmichael | 03/11/2010 at 02:48 PM
PHC: Love the thoughtful comment.
I tried to find some numbers for you. It appears that the PSA test has a sensitivity of 80%+ and a specificity of 33%. That means someone who is healthy has a 2/3 chance of testing positive. (This comes from an Albuquerque study by Hoffman, Gilliland, et al., cited by Prof. Jeff Douglas at Illinois.) This is worse than randomly flipping a coin.
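To spell out the arithmetic behind the 2/3 figure, here is a minimal check using only the specificity quoted above:

```python
# A minimal check of the 2/3 figure: the chance that a healthy man tests
# positive is one minus the specificity. (The quoted 80%+ sensitivity does
# not enter this particular calculation.)
specificity = 0.33                       # P(test negative | healthy), as quoted
false_positive_chance = 1 - specificity  # P(test positive | healthy)
print(f"Chance a healthy man tests positive: {false_positive_chance:.0%}")  # 67%, about 2/3
```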
For the PSA test, a negative means something but a positive means nothing, so if you had three negatives in a row, I wouldn't be too worried. But... I'm assuming those tests were taken in quick succession, because if they were taken years apart, there is no guarantee that the patient's health status has not changed.
Economics is an important consideration since money is a finite resource. But for prostate cancer, we don't need an economic argument. As Dr. Ablin pointed out, clinical trials showed that those receiving PSA screening had the same death rates as those who didn't (both very low).
So, to summarize: this test costs $3 billion a year, tells 2/3 of healthy test-takers they may have cancer, and does not help the people it identifies as sick prolong their lives. Pretty damning, huh?
Posted by: Kaiser | 03/12/2010 at 01:13 AM
Positive screening tests (especially mammograms and PSA tests) are followed up in one of two ways:
1. Wait-and-see: have a repeat test in 3 or 6 months.
2. Biopsy.
Posted by: Jon Peltier | 08/20/2011 at 09:09 AM