Andrew Gelman linked to this great reporting by Reuters on U.S. healthcare economics. It's a must-read. Be patient, and read through to the end even though it's a long piece.
Andrew cites statistician Don Berry who explains what "lead time bias" is, and why survival time is always the wrong metric to use in evaluating health outcomes. Survival time is the time from diagnosis to death. By doing more screening and diagnosing earlier, survival time will magically increase even if the patient's life expectancy stays put.
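To make the concept concrete, here is a minimal simulation sketch (my own illustration with made-up numbers, not anything from Berry or the Reuters piece): every simulated patient dies at exactly the same age in both scenarios; the only thing that changes is how early the cancer is detected.

```python
# Minimal sketch of lead time bias (illustrative numbers only).
# Each patient dies a fixed 8 years after tumor onset, no matter when
# the cancer is diagnosed. Screening only moves the diagnosis earlier.
import random
from statistics import mean

random.seed(1)
n = 10_000

onset_age = [random.uniform(50, 70) for _ in range(n)]
death_age = [a + 8 for a in onset_age]           # death date is unaffected by screening

diagnosis_late = [a + 6 for a in onset_age]      # found from symptoms, 6 years after onset
diagnosis_early = [a + 2 for a in onset_age]     # found by screening, 2 years after onset

survival_late = mean(d - dx for d, dx in zip(death_age, diagnosis_late))
survival_early = mean(d - dx for d, dx in zip(death_age, diagnosis_early))

print(f"Survival time without screening: {survival_late:.1f} years")   # 2.0
print(f"Survival time with screening:    {survival_early:.1f} years")  # 6.0
print(f"Mean age at death (both cases):  {mean(death_age):.1f}")       # identical
```

Survival time triples, yet nobody lives a day longer.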
***
I ignored Andrew's warning and spent some time reading the Philipson et al. paper (link). Time I wish I could get back. To save you the trouble, I will discuss a few gaping holes beyond the howler already identified by Berry; there are many other, less significant issues I won't get into.
The title of the paper purports to address a "causal" relationship: from overall health care spending to health benefits.
The reader immediately discovers that the authors analyzed a different "causal" relationship: from spending on cancer care to survival gains for cancer patients.
It may appear that the substitutions are harmless: spending on cancer care is a proxy for overall health care spending; survival gains for cancer patients are a proxy for overall health benefits. The authors hid the useful information in the Appendix (available online). In Table 3, we learn that spending on cancer care is only a single-digit percentage of total health care spending in almost every country. Besides, deaths from the 13 types of cancer counted in their study constitute only 31 percent of total cancer deaths in the U.S. (using the 2011 statistics from this report - PDF). The list of included cancer types excludes the biggest killer (lung cancer, over 150,000 deaths) while including testicular cancer, which caused 350 deaths in 2011.
So, even if the analysis is correct, the result cannot be generalized to talk about cost and benefit of all health care spending. This is an instance of "availability bias": even though cancer makes a lot of news, most health care spending has nothing to do with cancer, and so we can't use cancer care spending as a proxy.
***
In assessing the value of cancer care spending, the authors decided to use a modeled change in death rates, rather than the actual observed data. Neither in the paper nor in the appendix is the actual model reported, nor is there any information on goodness of fit. However, we don't need to know the model to know it doesn't fit.
Take a look at the fourth column of Table 1 in the Appendix. This column shows the predicted deaths avoided or incurred in the U.S. (given the additional spending in the U.S. relative to "Europe").
Let's do a sanity check on these numbers. For colorectal cancer, the model claims that the extra spending has avoided 282,000 deaths over the 23 years (1982-2005), or roughly 12,300 deaths per year. According to the cancer death statistics, about 50,000 deaths from colon cancer actually occurred in the U.S. in 2011. That means the model claims that colon cancer deaths would have been 25% higher were it not for the extra spending. What is the miracle drug that caused this gigantic improvement? What prevents this amazing new treatment from crossing the Atlantic?
Maybe you believe in miracles. Then take a look at stomach cancer. Here, the negative number seems to imply that the additional spending has induced 225,000 stomach cancer deaths over 23 years. That sounds really horrifying. Given that stomach cancer killed 10,300 Americans in 2011, the model claims that the extra spending has roughly doubled the number of deaths from stomach cancer!
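If you want to redo this back-of-the-envelope arithmetic yourself, it fits in a few lines (using only the numbers quoted above):

```python
# Sanity check of the model's claims (Appendix Table 1) against the
# actual 2011 U.S. death counts cited above.
years = 23  # the 1982-2005 study window

# Colorectal cancer: 282,000 deaths claimed to be avoided
avoided_per_year = 282_000 / years                  # ~12,300 per year
print(f"Colorectal: {avoided_per_year:,.0f} avoided per year, "
      f"{avoided_per_year / 50_000:.0%} of the ~50,000 actual deaths in 2011")

# Stomach cancer: 225,000 deaths claimed to be *induced* by the spending
induced_per_year = 225_000 / years                  # ~9,800 per year
print(f"Stomach: {induced_per_year:,.0f} induced per year, "
      f"{induced_per_year / 10_300:.0%} of the 10,300 actual deaths in 2011")
```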
Simply put, their model makes no sense.
***
Now, go back to Table 3 in the Appendix and read the note. It says that missing data for the percentage of health care spending that is cancer-related are imputed as 6.5% (30% higher than the U.S. assumption of 5%, which came from a totally different source), and we find that Iceland, Norway, Slovakia and Slovenia (40% of the countries) are all imputed.
The problem here is that the authors are not consistent in their treatment of missing data. In the main paper, they explain again and again that their sample is restricted by data availability (i.e. they did not impute values for missing data). For example, they chose the 10 European countries because "only ten reported data consistently over the 1983-99 period". This means no Italy and no Spain, but you have Wales and Scotland (though not England), plus Slovakia and Slovenia (why are these comparable to the U.S.?).
Why those particular 13 cancers? Because "data were consistently available from both the European and US survival databases". This means including testicular cancer and excluding lung cancer. Instead of imputing values for lung cancer, they simply drop the cancer type that causes the most deaths.
Why look at survival differences only for patients diagnosed from 1995 through 1999? You guessed it: only for that period could they find consistent data.
Given that they use models throughout the research, and that they imputed values for the proportion of spending devoted to cancer treatment, they could have imputed values in these other cases as well, and then the result could perhaps be generalized.
Dropping data because some variables are missing should be justified clearly. It's too easy to cherry-pick your dataset this way.
***
How about another nonsensical assumption? The average value of an additional year of life for someone who's dying from cancer is $150,000 to $360,000. They describe these as "standard figures for an extra year of life" and call the lower end of the range "conservative". Only 5% of Americans earn over $100,000 per year. The median personal income is less than $40,000. (From Wikipedia, 2004 figures, I believe.) Enough said.
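For perspective, here is the quick comparison behind that "Enough said" (my own arithmetic, taking the median personal income to be roughly $40,000):

```python
# The paper's "value of an extra year of life" vs. U.S. median personal income
low, high = 150_000, 360_000     # the paper's range, per life-year
median_income = 40_000           # roughly, per the Wikipedia figure cited above

print(f"'Conservative' end: {low / median_income:.1f}x the median personal income")
print(f"Upper end:          {high / median_income:.1f}x the median personal income")
```

In other words, even the "conservative" figure is nearly four times what the median American earns in a year.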
***
It's sad that this paper gets publicity only because it reaches a conclusion that goes against "conventional wisdom". The clear evidence so far has been that while the U.S. spends twice as much on health care as other "wealthy" nations, our life expectancy is lower, at the bottom of the class. (See here, for example.)
The chart shown on the right is as clear as it can be. (I discussed this chart on Junk Charts.) The situation with science journalism is very dire, in my opinion, when outlets are chasing clicks and sales by publicizing bad studies that have eye-catching headlines.