A lot of hot air has been expended on homeopathy’s apparent inability to “prove” itself in clinical trials. Many people, some of whom call themselves scientists, seem to need only this fact, plus the therapy’s apparent implausibility, to jump to the conclusion that the whole thing is nonsense on stilts, and to work themselves up into lathers of righteous indignation about the fact that it continues to be practiced. I’ve gone into this a couple of times in the comments to posts on this blog, but the question really does deserve detailed examination, because the issue is not nearly as simple as it might seem.

The presupposition of clinical trials is that there is a stable, locally active cause that is active only in the treatment group, irrespective of blinding, the circumstances of the trial, or any change in clinical context brought about by the trial itself. In plain English, this means that the whole basis of clinical trials is predicated on the assumption that the bulk of the treatment effect resides in the physical substance being trialed. It is a localist hypothesis, proceeding, in homeopathy’s case, from the following logic:

The localist hypothesis of homeopathy is the simplest, most straightforward, and intuitively most appealing. It presupposes that:
1. Homeopathy works; we know this from clinical practice. This is conceded as a heuristic starting point.
2. If homeopathy works, it cannot be molecules that are the active principle, because at high homeopathic potencies they are statistically too few to be biologically active. This is a logical extrapolation from the known body of biomolecular knowledge.
3. If molecules are not the active principle, it must be something else that is fixed to or in the remedy, and hidden from ordinary analysis for lack of sufficiently sensitive instruments, or theory, or both.
It is a localist hypothesis because it presumes the active principle must be local to the remedy: it is construed as residing in the material substance itself.

Walach, H. Reinventing the Wheel Will Not Make It Rounder: Controlled Trials of Homeopathy Reconsidered. Journal of Alternative and Complementary Medicine, Vol 9, No 1, 2003, pp 7–13

There’s actually no logical reason why this should be so. Think about it: the assumption rests on a habitual way of thinking, born of acclimation to the parameters of the pharmaceutical model of intervention. Pharmaceutical interventions operate in the mid to high ranges of the dose-response curve. Even allowing for the Arndt-Schulz law (nowadays discussed as hormesis), homeopathic remedies are clearly right off that scale. Why, then, should we presume that they behave in the same way, and can be tested accordingly?

Experienced homeopathic researchers have made exactly this point:

[Figure: thought experiment illustrating the efficacy paradox]

Imagine the following situation as depicted in the figure: Let there be two treatments x and y for the same condition, say chronic pain. Let there be two placebo controlled RCTs with comparable patient populations. In every one of these trials we will have measurement artefacts caused by unreliability of measures; let them be equal in all groups. In every one of these trials, we will also have regression to the mean as a statistical artefact and as a result of the natural course of the disease studied; some patients will improve regardless of the treatment applied. Then there will be nonspecific treatment effects: Patients expect to get better when treated, especially in a trial. Hope will work against the general demoralization caused by disease. The attention of doctors and nurses within the context of a trial and perhaps the special attention paid to patients within the context of a particular CAM intervention such as homeopathy, healing, or acupuncture, will also contribute to the nonspecific part of improvement. Let us not forget that a treatment that can help patients to understand their suffering by providing an explanation, a common explanatory myth, is a therapeutic factor, too (Frank, 1989). And then there will be specific factors of treatment. Let us assume that treatment y is specifically effective. Its specific efficacy will be 20%, which, in a trial that is adequately powered, will be significant. Thus, everybody will conclude: Treatment y is an effective treatment for chronic pain. Treatment x only has 10% specific efficacy and let us assume that studies of treatment x are generally underpowered to find this effect. Everybody will conclude: Treatment x is an ineffective treatment for chronic pain. What usually is overlooked is the fact that the nonspecific treatment effects of treatment x are much larger. In the thought experiment, I have chosen them to be 30% for treatment x. For treatment y, they would only be 5%. In such a case treatment x, although overall much more powerful with 70% of patients potentially benefitting from it by virtue of its strong nonspecific effects, would be neglected in favor of treatment y, with 55% of patients benefitting from it, because y has a stronger specific treatment effect.

I maintain that this situation is frequently true for CAM therapies. Studies are often underpowered, e.g., for acupuncture, and thus potential specific effects are overlooked. The conclusion of reviewers and the educated public then is the verdict “inconclusive evidence” (Ezzo et al., 2001), and the political consequence, as just happened in Germany, is the decision to not include acupuncture in the scheme for public reimbursement, because the evidence for specific efficacy is inconclusive (Bundesausschuss Ärzte und Krankenkassen, 2001). However, nobody pays attention to the fact that perhaps the magnitude of nonspecific effects makes a treatment effective and not the specific effects. An even more complicated situation can arise when the circumstances of a trial, such as blinding and changing the natural flow of patient–doctor interaction and treatment sequences, change the context of a treatment dramatically and thus alter the potential nonspecific effects in a detrimental way. This can happen in blinded trials of homeopathy, in which insecurity arises from the blinding of doctors, and also in trials of acupuncture, when blinding procedures make it necessary that the doctor who is taking the case and making the assessment is different from the person who is administering the treatment. In all such cases, trials may alter the context of a treatment and thus diminish potent nonspecific factors and thereby underestimate effectiveness.

Walach, H. The Efficacy Paradox in Randomized Controlled Trials of CAM and Elsewhere: Beware of the Placebo Trap. Journal of Alternative and Complementary Medicine, Vol 7, No 3, 2001, pp 213–218
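For what it’s worth, the arithmetic of the thought experiment is easy to make explicit. Here is a minimal sketch in Python; note that the 30% shared baseline (regression to the mean plus the natural course of the disease) is not stated in the quote, but is what I infer it must be for the totals to come out at Walach’s 70% and 55%:

```python
# Walach's "efficacy paradox" arithmetic, made explicit.
BASELINE = 30  # % who improve anyway (regression to the mean + natural
               # course) -- inferred so totals match Walach's 70% and 55%

treatments = {
    "x": {"specific": 10, "nonspecific": 30},
    "y": {"specific": 20, "nonspecific": 5},
}

for name, parts in treatments.items():
    total = BASELINE + parts["nonspecific"] + parts["specific"]
    print(f"treatment {name}: specific {parts['specific']}%, "
          f"nonspecific {parts['nonspecific']}%, total {total}%")

# treatment x: specific 10%, nonspecific 30%, total 70%
# treatment y: specific 20%, nonspecific 5%, total 55%
```

An RCT scores a treatment only on its specific component, the difference between verum and placebo arms. On that measure y (20%) beats x (10%), even though x leaves 70% of patients better off against y’s 55%.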

This thought experiment of Walach’s models the experience of testing homeopathy and other CAM therapies very well. Data from clinical outcome studies consistently suggest that around 70% of patients benefit from homeopathic treatment, yet repeated attempts to replicate these results in controlled trials have failed. The discrepancy therefore needs explaining, rather than assuming that the clinical results, especially those from the large cohort studies that have been undertaken, can simply be put down to such notions as “bias”, “regression to the mean” and “placebo response”.
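Walach’s point about underpowering is also easy to quantify. A rough sketch, using the standard normal-approximation sample-size formula for comparing two proportions; the 30% placebo-arm response rate is an illustrative assumption of mine, not a figure from his paper:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_placebo, p_treatment, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    var = p_placebo * (1 - p_placebo) + p_treatment * (1 - p_treatment)
    return ceil((z_a + z_b) ** 2 * var / (p_placebo - p_treatment) ** 2)

# 30% placebo response is an illustrative assumption.
print(n_per_arm(0.30, 0.40))  # ~354 per arm to detect a 10-point effect
print(n_per_arm(0.30, 0.50))  # ~91 per arm for a 20-point effect
```

On these assumptions, a trial with 100 patients per arm would have roughly 80% power to detect treatment y’s 20-point specific effect, but only about a third as much chance of detecting x’s 10-point one, so a verdict of “inconclusive evidence” for x follows almost mechanically.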

And the point still stands, as Kate Chatfield has remarked:

… if homeopaths can facilitate a placebo-induced healing response in over 70% of people who visit them, many of whom have previously not been helped by various types of allopathic intervention, then surely homeopaths should be highly revered and re-labelled ‘miracle workers’.

Chatfield, K. In Pursuit of Evidence.

As I’ve argued repeatedly, the phenomenon that is homeopathy can’t simply be written off. We need to re-examine the assumptions underlying trial design and look at other ways of satisfactorily evidencing efficacy.
