Two recent, related papers, published in the Journal of Clinical Epidemiology and in Homeopathy (the journal of the Faculty of Homeopathy, the UK’s professional organisation of medically qualified homeopaths), have reconstructed the analysis carried out by the authors of The Lancet’s much-vaunted 2005 meta-analysis, on the back of which the journal triumphantly editorialised “the end of homeopathy”. They place on record the fact that the study was deeply flawed and, in some instances, just plain incorrect.
These papers emphatically underline the position this blog has taken from the outset — that the underlying data do not support the assertion that homeopathy is no more than placebo. The jury is still out, and those who claim otherwise are misrepresenting their personal opinion as proven scientific fact when it is nothing of the kind.
Dutch homeopathic physician Lex Rutten, working with colleague C F Stolper and statistician Rainer Lüdtke, has exhaustively analysed the data used in the meta-analysis. (It’s worth noting here — for those who aren’t already aware of the fact — that much of the underlying data for the study was only provided some months after publication, following an outcry from homeopathic and conventional physicians and researchers alike, and that the study’s original publication violated The Lancet’s requirements for transparency. In other words, had The Lancet followed its own rules, the study should not have been approved for publication in the first place.)
In the paper for Homeopathy (Rutten, A L B & Stolper, C F. The 2005 meta-analysis of homeopathy: the importance of post-publication data. Homeopathy (2008) 97, 169–177), Rutten and Stolper set out to answer the following questions:
What was the outcome of Shang et al’s predefined hypotheses?
Were the homeopathic and conventional trials comparable?
Was subgroup selection justified?
The possible role of ineffective treatments. Was the conclusion about effect justified?
Were essential data missing in the original article?
Results: The quality of trials of homeopathy was better than of conventional trials. Regarding smaller trials, homeopathy accounted for 14 out of 83 and conventional medicine 2 out of 78 good quality trials with n < 100. There was selective inclusion of unpublished trials only for homeopathy. Quality was assessed differently from previous analyses. Selecting subgroups on sample size and quality caused incomplete matching of homeopathy and conventional trials. Cut-off values for larger trials differed between homeopathy and conventional medicine without plausible reason. Sensitivity analyses for the influence of heterogeneity and the cut-off value for ‘larger higher quality studies’ were missing. Homeopathy is not effective for muscle soreness after long distance running, OR = 1.30 (95% CI 0.96–1.76). The subset of homeopathy trials on which the conclusion was based was heterogeneous, comprising 8 trials on 8 different indications, and was not matched on indication with those of conventional medicine. Essential data were missing in the original paper.
The authors conclude:
A review of data provided after publication of Shang et al’s analysis did not support the conclusion that homeopathy is a placebo effect. There was intermingling of comparison of quality and comparison of effects, and thus matching was lost. The comparison of effects was also flawed by subjective choices and heterogeneity. The result in the subgroup from which the conclusion was drawn was further influenced by the choice of cut-off value for ‘larger’ trials. If we confine ourselves to the predefined hypotheses and the part of this analysis that is consistent with the comparative design, the only legitimate conclusion is that quality of homeopathy trials is better than of conventional trials, for all trials (p = 0.03) as well as for smaller trials with n < 100 (p = 0.003).
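As a side note on the mechanics: the muscle-soreness result quoted earlier (OR = 1.30, 95% CI 0.96–1.76) is non-significant precisely because the confidence interval straddles 1. Here is a minimal sketch of how a pooled odds ratio and its interval come out of standard fixed-effect, inverse-variance weighting; the 2×2 tables below are invented purely for illustration and have nothing to do with the actual trial data:

```python
import math

# Hypothetical trials: (events_treatment, n_treatment, events_placebo, n_placebo).
# These numbers are made up for illustration only.
trials = [
    (12, 50, 10, 50),
    (20, 80, 15, 80),
    (8, 40, 9, 40),
]

weights_sum = 0.0
weighted_log_or_sum = 0.0
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c            # non-events in each arm
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d      # variance of log OR (Woolf's method)
    w = 1 / var                      # inverse-variance weight
    weights_sum += w
    weighted_log_or_sum += w * log_or

pooled_log_or = weighted_log_or_sum / weights_sum
se = math.sqrt(1 / weights_sum)
or_point = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * se)
ci_high = math.exp(pooled_log_or + 1.96 * se)
print(f"pooled OR = {or_point:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
print("significant at 5%:", not (ci_low <= 1.0 <= ci_high))
```

The verdict flips the moment the interval no longer contains 1, which is why the question of which trials enter the pool matters so much.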
Rutten and Stolper’s comments on cut-off values for sample size are particularly telling:
Cut-off values for sample size were not mentioned or explained in Shang et al’s analysis. Why were eight homeopathy trials compared with six conventional trials? Was this choice predefined or post-hoc? Post-publication data showed that cut-off values for larger higher quality studies differed between the two groups. In the homeopathy group the cut-off value was n = 98, including eight trials (38% of the higher quality trials). The cut-off value for larger conventional studies in this analysis was n = 146, including six trials (66% of the higher quality trials). These cut-off values were considerably above the median sample size of 65. There were 31 homeopathy trials larger than the homeopathy cut-off value and 24 conventional trials larger than the conventional cut-off value. We can think of no criterion that could be common to the two cut-off values. This suggests that this choice was post-hoc.
The knee-jerk sceptical response will likely be that the authors are homeopaths and they would say that, wouldn’t they? But the authors restrict themselves to an uncontentious and easily verifiable critique of Shang et al’s data and analysis, and draw no conclusions one way or the other about what the data are saying about homeopathy.
They conclusively demonstrate that for the subset of 21 high quality homeopathic trials (as defined by Shang et al), a positive or negative conclusion for homeopathy depends crucially on the exact number of trials selected. Re-running the analysis with different cut-off values for sample size showed that all but 3 of 20 possible cut-off values led to a significant effect for homeopathy if all higher quality trials are considered, a result more in line with the 5 earlier meta-analyses of homeopathic trials. A firm positive conclusion is found, for example, merely by omitting four trials that showed Arnica is ineffective for muscle soreness after long-distance running, a condition for which neither homeopathic nor conventional treatment provided any relief (and which, one could argue, hardly constitutes a medical condition in the first place, being a perfectly natural and inevitable consequence of unaccustomed exercise).
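To see how sensitive a pooled result can be to the choice of cut-off, consider a toy sensitivity analysis in the spirit of the one the authors ran. The trial list below is entirely made up (sample size, log odds ratio, standard error); the point is only that moving the cut-off changes which trials are pooled, and can flip the verdict from significant to non-significant:

```python
import math

# Invented trials for illustration: (sample size n, log odds ratio, standard error).
trials = [
    (40, 0.45, 0.30), (55, 0.38, 0.28), (62, 0.50, 0.26),
    (70, 0.30, 0.25), (85, 0.25, 0.24), (98, 0.20, 0.22),
    (120, 0.05, 0.18), (150, 0.02, 0.15),
]

def pooled(subset):
    """Fixed-effect inverse-variance pooling; returns (pooled log OR, its SE)."""
    w = [1 / se**2 for _, _, se in subset]
    est = sum(wi * lor for wi, (_, lor, _) in zip(w, subset)) / sum(w)
    return est, math.sqrt(1 / sum(w))

for cutoff in [0, 50, 80, 100, 130]:
    subset = [t for t in trials if t[0] >= cutoff]
    est, se = pooled(subset)
    significant = abs(est) > 1.96 * se
    print(f"cut-off n >= {cutoff:3d}: {len(subset)} trials, "
          f"pooled logOR = {est:+.3f}, significant: {significant}")
```

With these invented numbers the pooled effect is significant when all trials are included, and non-significant once only the largest trials remain — precisely the kind of dependence on an arbitrary threshold that the two papers document.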
In the Journal of Clinical Epidemiology paper (Lüdtke, R & Rutten, A L B. The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. Journal of Clinical Epidemiology 61 (2008) 1197–1204), Lüdtke and Rutten conclude:
Our results do neither prove that homeopathic medicines are superior to placebo nor do they prove the opposite. This, of course, was never our intention, this article was only about how the overall results and the conclusions drawn from them change depending on which subset of homeopathic trials is analyzed. As heterogeneity between trials makes the results of a meta-analysis less reliable, it occurs that Shang’s conclusions are not so definite as they have been reported and discussed.
What does all this mean in plain English? It’s a pretty good bet the study was deliberately skewed to support the initial presumption that homeopathy equates to placebo.
As Einstein once remarked, “Not everything that counts can be counted; and not everything that can be counted counts.” Or perhaps we could go one step further. That a prestigious journal such as The Lancet should base an editorial and an extensive publicity campaign passing judgement on an entire therapy on a study of such dubious quality, one which violated the journal’s own publication guidelines, is more in line with Wordsworth’s assertion: “Science appears as what in truth she is; not as our glory and absolute boast, but as a succedaneum, and a prop to our infirmity.”