SPOTTING BAD SCIENCE 102
So You Have a Pill...
This chapter was written by Dr. Ben Goldacre, who has written the weekly “Bad Science” column in the Guardian since 2003 and is a recipient of the Royal Statistical Society’s Award for Statistical Excellence in Journalism. He is a medical doctor who, among other things, specializes in unpacking sketchy scientific claims made by scaremongering journalists, questionable government reports, evil pharmaceutical corporations, PR companies, and quacks.
What I’m about to tell you is what I teach medical students and doctors—here and there—in a lecture I rather childishly call ‘Drug Company Bullshit’. It is, in turn, what I was taught at medical school,1 and I think the easiest way to understand the issue is to put yourself in the shoes of a big pharma researcher.
You have a pill. It’s OK, maybe not that brilliant, but a lot of money is riding on it. You need a positive result, but your audience aren’t homeopaths, journalists or the public: they are doctors and academics, so they have been trained in spotting the obvious tricks, like ‘no blinding’, or ‘inadequate randomisation’. Your sleights of hand will have to be much more elegant, much more subtle, but every bit as powerful.
What can you do?
Well, firstly, you could study it in winners. Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement. So only study your drug in the latter group. This will make your research much less applicable to the actual people that doctors are prescribing for, but hopefully they won’t notice. This is so commonplace it is hardly worth giving an example.
Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug against placebo, because it proves nothing of clinical value: in the real world, nobody cares if your drug is better than a sugar pill; they only care if it is better than the best currently available treatment. But you’ve already spent hundreds of millions of dollars bringing your drug to market, so stuff that: do lots of placebo-controlled trials and make a big fuss about them, because they practically guarantee some positive data. Again, this is universal, because almost all drugs will be compared against placebo at some stage in their lives, and ‘drug reps’—the people employed by big pharma to bamboozle doctors (many simply refuse to see them)—love the unambiguous positivity of the graphs these studies can produce.
Then things get more interesting. If you do have to compare your drug with one produced by a competitor—to save face, or because a regulator demands it—you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well; or give a very high dose of the competing drug, so that patients experience lots of side-effects; or give the competing drug in the wrong way (perhaps orally when it should be intravenous, and hope most readers don’t notice); or you could increase the dose of the competing drug much too quickly, so that the patients taking it get worse side-effects. Your drug will shine by comparison. You might think no such thing could ever happen. If you follow the references in the back, you will find studies where patients were given really rather high doses
of old-fashioned antipsychotic medication (which made the new-generation drugs look as if they were better in terms of side-effects), and studies with doses of SSRI antidepressants which some might consider unusual, to name just a couple of examples. I know. It’s slightly incredible.
Of course, another trick you could pull with side-effects is simply not to ask about them; or rather—since you have to be sneaky in this field—you could be careful about how you ask. Here is an example. SSRI antidepressant drugs cause sexual side-effects fairly commonly, including anorgasmia. We should be clear (and I’m trying to phrase this as neutrally as possible): I really enjoy the sensation of orgasm. It’s important to me, and everything I experience in the world tells me that this sensation is important to other people, too. Wars have been fought, essentially, for the sensation of orgasm. There are evolutionary psychologists who would try to persuade you that the entirety of human culture and language is driven, in large part, by the pursuit of the sensation of orgasm. Losing it seems like an important side-effect to ask about.
And yet, various studies have shown that the reported prevalence of anorgasmia in patients taking SSRI drugs varies between 2 per cent and 73 per cent, depending primarily on how you ask: a casual, open-ended question about side-effects, for example, or a careful and detailed enquiry. One 3,000-subject review on SSRIs simply did not list any sexual side-effects on its twenty-three-item side-effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.
But back to the main outcomes. And here is a good trick: instead of a real-world outcome, like death or pain, you could always use a ‘surrogate outcome’, which is easier to attain. If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths; measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and quicker to do, so your result will be cheaper and more positive. Result!
Now you’ve done your trial, and despite your best efforts things have come out negative. What can you do? Well, if your trial has been good overall, but has thrown out a few negative results, you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph. Mention it briefly in the text, and ignore it when drawing your conclusions. (I’m so good at this I scare myself. Comes from reading too many rubbish trials.)
If your results are completely negative, don’t publish them at all, or publish them only after a long delay. This is exactly what the drug companies did with the data on SSRI antidepressants: they hid the data suggesting they might be dangerous, and they buried the data showing them to perform no better than placebo. If you’re really clever and have money to burn, then after you get disappointing data, you could do some more trials with the same protocol in the hope that they will be positive. Then try to bundle all the data up together, so that your negative data is swallowed up by some mediocre positive results.
Or you could get really serious and start to manipulate the statistics. For two pages only, this will now get quite nerdy. Here are the classic tricks to play in your statistical analysis to make sure your trial has a positive result.
Ignore the protocol entirely
Always assume that any correlation proves causation. Throw all your data into a spreadsheet programme and report—as significant—any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive just by sheer luck.
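To see how generously sheer luck rewards this approach, here is a minimal sketch (my own illustration, not from any real trial; it assumes Python with numpy and scipy installed). With a hundred unrelated measurements tested at the usual p < 0.05 threshold, roughly five will come up ‘significant’ on pure noise.

```python
# Illustrative sketch only: fish for correlations in pure noise and some
# will clear the p < 0.05 bar by luck alone. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_measures = 50, 100

outcome = rng.normal(size=n_patients)               # response to a drug that does nothing
noise = rng.normal(size=(n_measures, n_patients))   # 100 unrelated measurements

false_hits = sum(stats.pearsonr(outcome, m)[1] < 0.05 for m in noise)
print(f"'Significant' correlations found in random data: {false_hits} of {n_measures}")
```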
Play with the baseline
Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.
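A quick simulation shows what that asymmetry buys you. This is a hypothetical sketch of my own (assuming numpy; the trial, its scores and its sample sizes are all invented): the drug does nothing, but reporting whichever of the unadjusted or baseline-adjusted analyses flatters it produces a reliably positive ‘benefit’.

```python
# Hypothetical sketch: a drug with zero real effect, analysed under the
# 'adjust for baseline only when it suits you' rule described above.
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n=30):
    base_drug, base_plac = rng.normal(50, 10, n), rng.normal(50, 10, n)  # baseline scores
    end_drug = base_drug + rng.normal(0, 5, n)                           # no treatment effect
    end_plac = base_plac + rng.normal(0, 5, n)
    unadjusted = end_drug.mean() - end_plac.mean()
    adjusted = (end_drug - base_drug).mean() - (end_plac - base_plac).mean()
    return max(unadjusted, adjusted)   # report whichever analysis flatters the drug

benefit = np.mean([one_trial() for _ in range(10_000)])
print(f"Average reported 'benefit' of a drug that does nothing: {benefit:.2f}")
```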
Ignore dropouts
People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side-effects. They will only make your drug look bad. So ignore them, make no attempt to chase them up, do not include them in your final analysis.
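Here is a hypothetical sketch of why that flatters you (mine again, assuming numpy; the dropout model is invented for illustration): if patients who do badly are the ones who vanish, an analysis of completers alone overstates the benefit without any further effort on your part.

```python
# Hypothetical sketch: the worse a patient does, the more likely they are to
# drop out, so analysing completers only makes a useless drug look helpful.
import numpy as np

rng = np.random.default_rng(2)

n = 1000
improvement = rng.normal(0.0, 1.0, n)   # true average benefit: zero
# Invented dropout model: probability of dropping out falls as improvement rises.
dropped_out = rng.random(n) < 1 / (1 + np.exp(3 * improvement))

print(f"Mean improvement, all randomised patients: {improvement.mean():+.2f}")
print(f"Mean improvement, completers only:         {improvement[~dropped_out].mean():+.2f}")
```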
Clean up the data
Look at your graphs. There will be some anomalous ‘outliers’, or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.
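A sketch of the arithmetic, if you want it (hypothetical and my own, assuming numpy and scipy): two groups drawn from exactly the same distribution drift obligingly apart once ‘outliers’ are deleted only on the side that hurts the drug.

```python
# Hypothetical sketch: one-sided 'cleaning' of two identical groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
drug = rng.normal(0, 1, 40)      # both groups drawn from the same distribution
placebo = rng.normal(0, 1, 40)

def drop_inconvenient(scores, keep_high):
    # Delete points more than 1.5 standard deviations out, but only on the
    # side that makes this group look bad for our purposes.
    z = (scores - scores.mean()) / scores.std()
    return scores[z > -1.5] if keep_high else scores[z < 1.5]

print("Before cleaning:", stats.ttest_ind(drug, placebo))
print("After cleaning: ", stats.ttest_ind(drop_inconvenient(drug, keep_high=True),
                                          drop_inconvenient(placebo, keep_high=False)))
```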
‘The best of five... no... seven... no... nine!’
If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are ‘nearly significant’, extend the trial by another three months.
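The cost of peeking is easy to put a number on. Another hypothetical sketch (mine, assuming numpy and scipy; the trial size and peeking schedule are invented): check the p-value every ten patients and stop the moment it dips below 0.05, and a drug that does nothing comes out ‘positive’ far more often than the five per cent the threshold is supposed to promise.

```python
# Hypothetical sketch: stop the trial as soon as the p-value looks good.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def trial_with_peeking(max_n=200, peek_every=10):
    drug = rng.normal(0, 1, max_n)       # no real effect
    placebo = rng.normal(0, 1, max_n)
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_ind(drug[:n], placebo[:n]).pvalue < 0.05:
            return True                  # stop early and start writing up
    return False

positive = sum(trial_with_peeking() for _ in range(2000))
print(f"'Positive' trials of a useless drug: {positive / 2000:.0%} (nominal rate: 5%)")
```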
Torture the data
If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to anything’, as they say at Guantanamo Bay.
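To see how talkative tortured data can be, here is a hypothetical sketch (mine, assuming numpy and scipy; the subgroups are arbitrary labels with no meaning at all): slice a perfectly null trial into twenty subgroups and the smallest of the twenty p-values will, more often than not, slip under the magic 0.05.

```python
# Hypothetical sketch: slice a null trial into subgroups until one 'works'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n = 600
treated = rng.random(n) < 0.5
outcome = rng.normal(0, 1, n)        # the drug does nothing for anyone
subgroup = rng.integers(0, 20, n)    # 20 arbitrary patient categories

p_values = [
    stats.ttest_ind(outcome[(subgroup == g) & treated],
                    outcome[(subgroup == g) & ~treated]).pvalue
    for g in range(20)
]
print(f"Best subgroup p-value from a drug that does nothing: {min(p_values):.3f}")
```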
Try every button on the computer
If you’re really desperate, and analysing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
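And for completeness, a hypothetical sketch of the button-pushing itself (my own, assuming scipy; the tests listed are simply a sample of what the library offers): run several different tests on the same two groups and quote whichever p-value comes out smallest.

```python
# Hypothetical sketch: try several tests on the same data, report the best one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
drug = rng.normal(0, 1, 30)      # once again, no real difference
placebo = rng.normal(0, 1, 30)

p_values = {
    "Student's t-test": stats.ttest_ind(drug, placebo).pvalue,
    "Welch's t-test": stats.ttest_ind(drug, placebo, equal_var=False).pvalue,
    "Mann-Whitney U": stats.mannwhitneyu(drug, placebo).pvalue,
    "Kolmogorov-Smirnov": stats.ks_2samp(drug, placebo).pvalue,
    "Mood's median test": stats.median_test(drug, placebo)[1],
}
best = min(p_values, key=p_values.get)
print(f"Quote the {best}: p = {p_values[best]:.3f}")
```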
And when you’re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written and edited entirely by the industry). Remember, the tricks we have just described hide nothing, and will be obvious to anyone who reads your paper, but only if they read it very attentively, so it’s in your interest to make sure it isn’t read beyond the abstract. Finally, if your finding is really embarrassing, hide it away somewhere and cite ‘data on file’. Nobody will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully, that won’t be for ages.