Of placebos and straw men
Sep 25, 2009
Note: This post is purely on statistics, and is long, as I try to discuss somewhat technical issues.
(Via Social Sciences Statistics blog.)
This article in Wired (Aug 24, 2009) is a must-read. It presents current research on the "placebo effect", that is, the observation that some patients show improvement if they believe they are being treated (say, with pills) even though they have received "straw men" (say, sugar pills) that have no therapeutic value.
The article is a great piece, and a terrible piece. It fascinated and frustrated me in equal measure. Steve Silberman did a good job bringing up an important topic in a very accessible way. However, I find the core arguments confused.
Let's first review the setting: in order to prove that a drug can treat a disease, pharmas are required by law to conduct "double-blind placebo-controlled randomized clinical trials". Steve did a great job defining these: "Volunteers would be assigned randomly to receive either medicine or a sugar pill, and neither doctor nor patient would know the difference until the trial was over." Those receiving real medicine are known as the treatment group, and those receiving sugar pills form the placebo control group. Comparing the two groups at the end of the trial allows us to establish the effect of the drug (net of the effect of believing that one is being treated).
(I have run a lot of randomized controlled tests in a business setting and so have experience interpreting such data. I have not, however, worked in the pharma setting so if you see something awry, please comment.)
Two key themes run through the article:
1) An increasing number of promising drugs are failing to prove their effectiveness. Pharmas suspect that this is because too many patients in the placebo control group are improving without getting the "real thing". They have secretly combined forces to investigate this phenomenon. The purpose of such research is "to determine which variables are responsible for the apparent rise in the placebo effect."
2) The placebo effect means that patients could get better without getting expensive medicine. Therefore, studying this may help improve health care while lowering cost.
Theme #1 is misguided and silly, and of little value to patients. Theme #2 is worthwhile, even overdue, and of great value to patients. What frustrated me was that by putting these two together, without sufficiently delineating them, Steve allowed Theme #1 to borrow legitimacy from Theme #2.
To understand the folly of Theme #1, consider the following stylized example:
Effect on treatment group = effect of drug + effect of belief in being treated = 15
Effect on placebo group = effect of belief in being treated = 13
Thus, the difference between the two groups (15 - 13 = 2) = the effect of the drug, since the effect of belief in being treated affects both groups of patients.
A drug fails because the effect of the drug is not high enough above the placebo effect. If you are the pharmas cited in this article, you describe this result as the placebo effect being "too high". Every time we see "the placebo effect is too high", substitute "the effect of the drug is too low".
Consider a test of whether a fertilizer makes your plant grow taller. If the fertilized plant is the same height as the unfertilized plant, you would say the fertilizer didn't work. Who would conclude that the unfertilized plant is "unexpectedly tall"? That is what the pharmas are saying, and that is what they are supposedly studying as Theme #1. They want to know why the plant that grew on unfertilized soil was "so tall", as opposed to why the fertilizer was impotent. (One should of course check that the soil was indeed unfertilized as advertised.)
Take the above example where the effect on the placebo group was 13. Say it "unexpectedly" increased by 10 units, to 23. Since the effect on the treatment group = effect of drug + effect of believing that one is treated, the effect on the treatment group would also go up by 10, from 15 to 25. Because both the treatment group and the control group believe they are being treated, any increase in the placebo effect affects both groups equally and leaves the difference the same. This is why in randomized controlled tests, we focus on the difference in the metrics and don't worry about the individual levels. This is elementary stuff.
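To make this concrete, here is a minimal simulation sketch (Python with NumPy; the additive model and all the numbers are just my stylized example, not data from any trial). Shifting the placebo effect moves both groups by the same amount, so the estimated difference stays pinned to the drug effect:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_drug_effect(placebo_effect, drug_effect=2.0, n=5000):
    """Simulate one randomized trial under the additive model:
    outcome = placebo effect (+ drug effect if treated) + noise."""
    treated = rng.normal(placebo_effect + drug_effect, 5.0, n)
    control = rng.normal(placebo_effect, 5.0, n)
    return treated.mean() - control.mean()

# A placebo effect of 13 vs an "unexpectedly high" 23: the
# treatment-minus-control difference stays near the true drug effect (2).
for placebo in (13.0, 23.0):
    print(placebo, round(estimated_drug_effect(placebo), 2))
```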
One of their signature findings is that some cultures may produce people who tend to show high placebo effects. The unspoken conclusion we are supposed to draw is that if these trials were conducted closer to home, the drug would have been passed rather than failed. I have already explained why this is wrong: a higher placebo effect lifts the metrics in both the treatment and the control groups, leaving the difference the same.
There is one way in which cultural difference can affect trial results. This is if the effect of the drug is not common to all cultures; in other words, the drug is effective for Americans (say) but not so for Koreans (say). Technically, we say there is a significant interaction effect between the treatment and cultural upbringing. Then, it would be wrong to run the trial in Korea and generalize the finding to the U.S. Note that I am talking about the effect of the drug, not the effect of believing one is being treated (which is always netted out). To investigate this, one just needs to repeat the same trial in America; one does not need to examine why the placebo effect is "too high".
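For intuition, here is a small sketch of what such an interaction would look like (Python with NumPy; the per-country numbers are invented). Each trial still nets out its own placebo response; the issue is that the drug effect itself differs by country:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented per-country parameters: the drug works in country A but not
# in country B, and the placebo responses differ as well.
params = {"A": {"placebo": 8.0, "drug": 3.0},
          "B": {"placebo": 14.0, "drug": 0.0}}

for country, p in params.items():
    treated = rng.normal(p["placebo"] + p["drug"], 4.0, 4000)
    control = rng.normal(p["placebo"], 4.0, 4000)
    print(country, "estimated drug effect:",
          round(treated.mean() - control.mean(), 2))
# Each country's trial nets out its own placebo response; the estimates
# differ because the drug effect differs by country (the interaction).
```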
I have sympathy for a different explanation, advanced for psychiatric drugs. "Many experts are starting to wonder if what drug companies now call depression is even the same disease that the HAM-D [traditional criterion] was designed to diagnose". The idea is that as more and more people are being diagnosed as needing treatment, the average effect of the drug relative to the placebo group gets smaller and smaller. This is absolutely possible: the marginal people who are getting diagnosed are those with lighter problems, who thus derive less value from the drug; in other words, they could more easily get better via placebo. This is also elementary: in the business world, it is well known that if you throw discounts at loyal customers who don't need the extra incentive, all you are doing is increasing your cost without changing your sales.
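Here is a minimal sketch of this dilution story (Python with NumPy; the assumption that the drug's benefit scales with severity is mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
severity = rng.uniform(0, 10, 100_000)  # hypothetical illness severity

def avg_benefit_over_placebo(threshold):
    """Average drug benefit among those diagnosed at this threshold,
    assuming (for illustration only) benefit proportional to severity."""
    diagnosed = severity[severity >= threshold]
    return (0.5 * diagnosed).mean()

for threshold in (8, 5, 2):  # looser and looser diagnostic criteria
    print(threshold, round(avg_benefit_over_placebo(threshold), 2))
# As the diagnosis threshold drops, milder patients enter the trials and
# the average benefit of drug over placebo shrinks.
```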
No matter how the pharmas try, the placebo effect affects both groups and will always cancel out. Steve even recognizes this: "Beecher [who discovered the placebo effect] demonstrated that trial volunteers who got real medication were *also subject to placebo effects*." It is too bad he didn't emphasize this point.
On the other hand, Theme #2 is great science. We need to understand if we can harness the placebo effect. This has the potential of improving health care while at the same time reducing its cost. Of course, this is not so useful for pharmas, who need to sell more drugs.
I think it is not an accident that the Theme #2 research cited by Steve is done in academia while the Theme #1 research is done by an impressive roster of pharmas, with the help of NIH.
The article also tells us some quite startling facts:
- if they tell us, they have to kill us: "in typically secretive industry fashion, the existence of the project [Theme #1] itself is being kept under wraps." Why?
- "NIH staffers are willing to talk about it [Theme #1] only anonymously, concerned about offending the companies paying for it."
- Eli Lilly has a database of published and unpublished trials, "including those that the company had kept secret because of high placebo response". Substitute: low effect of the drug. This is the publication bias problem.
- Italian doctor Benedetti studies "the potential of using Pavlovian conditioning to give athletes a competitive edge undetectable by anti-doping authorities". This means "a player would receive doses of a performance-enhancing drug for weeks and then a jolt of placebo just before competition." I hope he is on the side of the catchers, not the cheaters.
- Learnt the term "nocebo effect", which is when patients develop negative side effects because they were anticipating them.
Again, highly recommended reading even though I don't agree with some of the material. Steve should have focused on Theme #2 and talked to people outside pharma about Theme #1.
I loved both the article and your write-up here. I think there is one possibility you did not consider sufficiently. Say the British have a higher placebo response than the Australians. You suggest that this is irrelevant because, unless the actual drug effect is shown to vary by culture (or presumably genetic variations tied closely to culture in most of the world), all that matters is the drug effect itself, which should remain the same once we subtract the placebo effect in each case.
But what about an ailment in which it is difficult to distinguish levels of improvement beyond a certain threshold - the line at which we say someone is cured? If the British placebo response is enough to push them past, or sufficiently close to, that line, then a British test would indeed be misleading with respect to the Australian market, and lead to a drug being unnecessarily rejected.
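Here is a rough sketch of what I mean (Python; all numbers invented). Once "success" is defined as crossing a cure line, the same drug effect in raw units can produce very different cure-rate gaps:

```python
import numpy as np

rng = np.random.default_rng(3)

def cure_rates(placebo_effect, drug_effect=2.0, cure_line=15.0, n=20000):
    """Fraction 'cured' (improvement past the cure line) in each arm,
    under an additive model with noise; all numbers are invented."""
    treated = rng.normal(placebo_effect + drug_effect, 3.0, n)
    control = rng.normal(placebo_effect, 3.0, n)
    return (treated > cure_line).mean(), (control > cure_line).mean()

for label, placebo in (("Australia", 13.0), ("Britain", 20.0)):
    t, c = cure_rates(placebo)
    print(label, "drug:", round(t, 2), "placebo:", round(c, 2),
          "gap:", round(t - c, 2))
# Where the placebo response alone carries nearly everyone past the cure
# line, the drug's contribution to the cure rate all but vanishes, even
# though its effect in raw units is identical in both places.
```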
Posted by: Alex | Sep 25, 2009 at 09:29 AM
Nice writeup, I'll make sure to read the Wired article.
I recently read Ben Goldacre's "Bad Science" (http://www.amazon.co.uk/Bad-Science-Ben-Goldacre/dp/000728487X/). It very nicely exposes all the issues with drug trials and has some very interesting examples of placebo/nocebo effects. For example, it turns out that if a dentist administers a placebo that the patient thinks is anesthesia but the dentist thinks will increase the pain, the patient actually experiences more pain than if the dentist thinks he's administering anesthesia. Weird! Maybe the patient picks up some subtle behavior in the doctor?
Posted by: Cris | Sep 25, 2009 at 09:52 AM
This podcast series is a critique of alternative medicines. One episode takes a look at the placebo effect, and offers some studies to consider:
QuackCast 5: Placebo Effect
"SCAM effects are often attributed to the placebo effect. Turns out the placebo effect does not exist. So when the effect of alt.med is equal to placebo effect, it is the same as saying it is equal to nothing. How true, how true."
http://www.quackcast.com/epodcasts/files/bc72f58ee17de43860b4147b9e45cad6-7.html
Posted by: David | Sep 25, 2009 at 10:43 AM
I think the article was suggesting that the differences in placebo response across countries matter because what a drug company is testing is the effect of their drug relative to the effect of placebo. You mentioned an example in which the effect for the treatment group was 15 units and the placebo group 13 - imagine a place in which the placebo effect was 10 units smaller. The treatment group would now show an effect of 5 units and the placebo group 3, i.e., the drug is nearly twice as effective as placebo and probably can be "passed." Those are the kinds of numbers the drug companies would be hoping for if they considered moving a trial location. However, I agree entirely about the importance of Theme #2, since the best treatment research would be trying to improve both placebo and drug response.
Posted by: Lisa | Sep 25, 2009 at 11:00 AM
"If the fertilized plant is the same height as the unfertilized plant, you would say the fertilizer didn't work. Who would conclude that the unfertilized plant is "unexpectedly tall"?"
You would rightly conclude this if the unfertilized plants used to be much shorter in previous trials. If the control group used to consistently grow to 8 inches--but now grows to 10 inches--you have a phenomenon worth investigating.
This kind of effect would indicate that there was possibly something wrong with the methodology for your control group; the conclusions of the study would--at best--be suspect.
Posted by: Jes | Sep 25, 2009 at 11:06 AM
Reiterating Alex's point, the effect a drug has can be bounded by whatever our base health is - if placebo alone has an effect of 15, and placebo + drug has an effectiveness of 25, but our base health is 16, the drug appears to only have an effect of 1. It's possible for a drug to be super-powerful and not show any effects because the placebo effect is ALSO super-powerful.
Obviously for things like anti-cholesterol medication, blood pressure medication, and anything else where a value can change linearly this won't apply (and you would be correct). But for more subjective things like judgements of mental health, it's in the realm of possibility.
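A quick sketch of the bounding idea (Python, with my made-up numbers):

```python
# Hypothetical ceiling model: observed improvement is capped at 16,
# the most our health can possibly improve.
CEILING = 16
placebo = 15
placebo_plus_drug = 25

observed_control = min(placebo, CEILING)            # 15
observed_treated = min(placebo_plus_drug, CEILING)  # 16
print("apparent drug effect:", observed_treated - observed_control)  # 1
```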
Posted by: hegemonicon | Sep 25, 2009 at 11:11 AM
Thanks all for the thoughtful responses. Keep them coming.
Alex, Lisa and hegemonicon: Your example works if the drug indeed performs better in one country compared to a different country. To me, that indicates poor test design, and it is a stretch to say the "control group did too well". Also, then the drug should be "passed" in the countries where it shows a large enough improvement, and should "fail" in the countries where the improvement is not sufficient.
Jes: If the base rate has increased from 8 to 10, the increase affects both groups and thus the interesting phenomenon is the general rise in plant heights, not just the rise in heights of the controls.
Some of this is semantics: glass half full or half empty. I'd argue that a serious investigator would ask the question neutrally, whether the placebo effect was measured properly, without assuming that it is over- or under-estimated. In trials in which drugs were shown to work better than placebo, could it be that the placebo effect is "unexpectedly too small"? Is anyone researching that?
hegemonicon: if the placebo effect is super-powerful, then any drug needs to be even more powerful, otherwise the patient is better off getting the placebo. I see nothing wrong with this but Steve tells us the pharmas do.
Cris: I do read and enjoy Ben's blog.
David: I haven't yet heard the podcast but the person uttering the quotation sounds confused. We don't need to invalidate the placebo effect to conclude that alt.med has no effect; it is sufficient to know that those getting alt.med have the same improvement as those getting placebo.
Posted by: Kaiser | Sep 25, 2009 at 01:19 PM
Regarding the podcast and invalidating the placebo effect: in the podcast, the commentator refers to a study or studies (my memory is a bit hazy) exploring the placebo effect, the conclusion being that the placebo effect doesn't exist.
References: http://www.quackcast.com/page8/page8.html#5
I thought the study and conclusion were relevant to the topic here, as the opinion here seems to be that the placebo effect is definitely real.
Posted by: David | Sep 29, 2009 at 04:09 PM
Thanks for all of the posts. It makes for great reading. Having worked in the biomed field, I realise that the placebo effect is certainly understudied and potentially a very cost-effective way of augmenting traditional treatment. The arguments you outlined against pharmas' complaint that "the placebo effect is too high" hold together very nicely when there are linear changes in the mediators and outcome variables.
Let's take the example of a drug that is designed to lower blood pressure and the risk of heart attack. If the relationship between the change in blood pressure and change in risk of heart attack is linear, then the true effect of the drug does not depend on the placebo effect, because hypothetically it should be the same for both groups. In these cases, pharma companies complaining of the placebo effect being too high are just playing with themselves.
However, if there is a non-linear relationship between the change in blood pressure and the change in the proportion who suffer heart attacks, this argument may not hold. For example, imagine that there is a law of diminishing returns for lowering heart attack risk as you lower blood pressure. That is, if you lower systolic blood pressure from 200 to 180 (20 points), you may see a 5% reduction in heart attack risk; however, if you reduce blood pressure from 160 to 140 (20 points), you may only see a 2% reduction in heart attack risk. If this were the case, then the effect of the drug may appear smaller in studies with high placebo effects than in studies with little or no placebo effect. This of course only holds if both the drug and the placebo act on heart attack risk through lowering blood pressure. In this situation, a pharma company may very well wish to know how to reduce placebo effects.
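Here is a minimal sketch of that diminishing-returns story (Python; the risk curve is invented purely for illustration):

```python
def risk_pct(systolic_bp):
    """Hypothetical heart-attack risk (%): the curve flattens as blood
    pressure falls, i.e., diminishing returns (numbers invented)."""
    return ((systolic_bp - 100) / 20.0) ** 2

BASELINE_BP = 200
DRUG_BP_DROP = 20  # the drug's true effect on blood pressure, everywhere

for placebo_bp_drop in (0, 40):  # weak vs strong placebo response
    control_bp = BASELINE_BP - placebo_bp_drop
    treated_bp = control_bp - DRUG_BP_DROP
    reduction = risk_pct(control_bp) - risk_pct(treated_bp)
    print(f"placebo BP drop {placebo_bp_drop}: "
          f"measured risk reduction from drug = {reduction:.1f}%")
# Same drug, same 20-point drop in blood pressure, yet the measured
# effect on heart attack risk is smaller where the placebo response is
# stronger, because the risk curve has flattened out.
```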
I admit this is an extremely simplified example and the actual figures on blood pressure and heart attack risk are purely fictional. It is just intended as food for thought.
Posted by: AoverT | Oct 08, 2009 at 05:38 AM