
Comments


Trent McBride

You might want to correct the spelling of "prostrate" to "prostate". Feel free to delete this comment.

Trent McBride

I am a pathologist with an interest in prostate cancer. A few thoughts:

- You write: "'Of the men who had three or four rounds of screening, the false positive rate was 12 to 13 percent.' I think this means that among men who don't have prostrate cancer, 12 to 13 percent will be told they have cancer three or four consecutive times! This level of inaccuracy is quite astounding."

Nobody is ever told they have prostate cancer from the screening test; a diagnosis is only made from a biopsy read by a pathologist, and the accuracy of that approaches 100%. I haven't read the study, but what that quote indicates to me is that this is the rate of men who had a false positive in at least one of 3-4 PSA tests, not necessarily all 3-4. And regardless, these "false positives" are followed clinically or biopsied; they are definitely not walking around with a cancer diagnosis.
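A minimal sketch of the arithmetic behind that reading, in Python: if each screening round carries a modest per-test false positive rate, the chance of at least one false positive compounds across rounds. The 3.5% per-test rate below is an assumed figure for illustration only, not a number taken from the study.

# Chance of at least one false positive across several independent PSA tests.
def cumulative_false_positive_rate(per_test_rate, rounds):
    return 1 - (1 - per_test_rate) ** rounds

for rounds in (3, 4):
    rate = cumulative_false_positive_rate(0.035, rounds)  # assumed 3.5% per test
    print(f"{rounds} rounds: {rate:.1%} chance of at least one false positive")

Under that assumption, three to four rounds give roughly a 10-13% cumulative rate, consistent with reading the 12-13 percent figure as "at least one false positive" rather than a false positive on every test.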

- They must have updated the Reuters article because the Lepor quote is now different. I can't judge the context of the original quote, but his quotes now are much more defensible.

- My personal bias is against the PSA test (even though I professionally benefit from it), but I would have to agree with Dr. Lee from the article: by the USPSTF's own grading scheme, their recommendation of "D" seems prematurely low, based on my browsing of the major PSA screening studies. Of the 3 large studies mentioned in the article, the smallest and the largest find a moderate, statistically significant mortality reduction. The one in the middle finds none (there was a mortality reduction, but it was not statistically significant). If you read the USPSTF scheme, a D means they definitely recommend against the test, but the above results seem to point more to a C by their own definition, meaning inconclusive. Now, just because test benefits were found does not mean that the test is worth it when you consider test costs and risks of treatment, but the USPSTF specifically does not consider test costs, and it seems hard to definitively balance test risks (they do attempt to do this).

- Still, at the end of the day, the absolute benefit of screening is small, even if the large European study which found a benefit is correct. The number needed to screen to prevent one death is about 1500. That is a high number, given all the costs and risks of the test. My lifetime risk of dying from prostate cancer is about 2.8%. Given the numbers above, if I choose to be screened it is reduced to about 2.2-2.3%, if I am calculating the numbers correctly. I am personally not convinced that is enough of a difference to make me care much either way.
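A minimal sketch, in Python, of how the number needed to screen (NNS) and lifetime risk figures above relate. Only the NNS of about 1500 and the 2.8% lifetime risk come from the comment; the 20% relative risk reduction is an assumed illustrative value, not a number quoted from the trials.

# NNS is the reciprocal of the absolute risk reduction over the trial period.
def absolute_risk_reduction(nns):
    return 1 / nns

# Apply a proportional (relative) reduction to a baseline lifetime risk.
def screened_lifetime_risk(baseline_risk, relative_reduction):
    return baseline_risk * (1 - relative_reduction)

print(f"Absolute risk reduction implied by NNS = 1500: {absolute_risk_reduction(1500):.3%}")
print(f"2.8% lifetime risk under an assumed 20% relative reduction: "
      f"{screened_lifetime_risk(0.028, 0.20):.2%}")

An assumed 20% relative reduction takes a 2.8% lifetime risk down to about 2.2%, which is in the range the comment arrives at.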

Ken

So the biopsy will find the cancer correctly, but the question is then whether it would become a problem during the patient's lifetime. If it would not, then the patient will have a prostatectomy, with all its complications, but without any benefit. No one will ever know what the outcome would have been.

It seems certain that intervention does save some lives, but that is to be expected. The treatments are unlikely to increase mortality, and in the right cases they will decrease it. We could give all men a prostatectomy at age 50, and it would save lives, but it would decrease quality of life for a high proportion. What is really needed from the studies is a way to identify the men at highest risk, so that only they are treated.

Kaiser

Trent and Ken: thanks for the informative comments.

The Best Colleges

We wanted to let you know that your blog was included in our list of the top 50 statistics blogs of 2011. Our goal was to highlight blogs that students and prospective students will find useful and interesting in their exploration of the field.

You can view the entire list at http://www.thebestcolleges.org/best-statistics-blogs/

Congratulations!

Klaus

Regarding your following argument:

"Critics say the studies were not long enough to show a benefit." Let us do a thought experiment. We start screening men from 21 years old. [...] This "false screening" will also identify most of those afflicted."

I think you may have misunderstood the argument of cancer screening proponents. The question is not whether a longer period of screening detects more cancers (in that case your thought experiment would apply).

Rather, the argument of screening proponents is that patients need to be followed for a longer time AFTER a cancer diagnosis was made, in order to find out whether screening leads to longer survival of patients (as compared to those whose cancer was found incidentally). Since prostate cancer often grows very slowly, it may take many years of follow-up before a difference becomes apparent. There can be all kinds of bias in these analyses. For example, if screening detects a cancer two years earlier than it would become apparent otherwise, then the screened patient may "appear" to live two years longer, not because the treatment is better but just because the diagnosis is made earlier. These things need to be carefully controlled for if one wants to assess screening benefit. However, the argument that observation time may be insufficient to show a true benefit of screening may be valid.
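A minimal sketch of the lead-time bias described above, with entirely hypothetical dates: if screening only moves the date of diagnosis earlier while the date of death stays the same, measured survival after diagnosis grows even though the patient gains nothing.

from datetime import date

death = date(2020, 1, 1)                     # hypothetical date of death
diagnosis_from_symptoms = date(2014, 1, 1)   # cancer found clinically
diagnosis_from_screening = date(2012, 1, 1)  # same cancer found two years earlier

survival_unscreened = (death - diagnosis_from_symptoms).days / 365.25
survival_screened = (death - diagnosis_from_screening).days / 365.25

print(f"Survival after diagnosis, unscreened: {survival_unscreened:.1f} years")
print(f"Survival after diagnosis, screened:   {survival_screened:.1f} years")
# Screening "adds" two years of survival on paper; the date of death is unchanged.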

Kaiser

Klaus: your interpretation would make sense if the trials had shown a benefit and we wanted to know whether that benefit is real or illusory. But the "critics" here are reacting to studies that show no benefit. In any case, the studies already use a 10-year follow-up period, so I don't see an issue there.

The main takeaway for me from the PSA test controversy is that screening is not a bad thing, but we need a much more accurate test before we ask healthy men to get screened.

