
CHAPTER 11 The use of sham or placebo controls in manual medicine research




Introduction


One of the key problems facing practitioners of manual therapy is to show that their treatments produce beneficial changes in their patients. Of course, practitioners of manual procedures are not alone in this, as it has long been recognized that proving the benefit of medical techniques in general, whether they be drugs, surgical procedures, or psychotherapies, is difficult. Indeed, it is only relatively recently in the history of medicine that true experimental designs have been developed to test the results of medical treatments. Perhaps the first skepticism regarding medical procedures arose in the late 1700s, with doubts over such practices as ‘mesmerism’ (animal magnetism) and ‘homeopathy’ (Kaptchuk 1998). In the early 1780s, what appears to be the first ‘blind’ test of a medical procedure was applied to mesmerism, to determine whether subjects who could not see where the ‘mesmeric energy’ was being applied could identify the area of application. Women were either blindfolded or not blindfolded during the magnetic application (a magnet held near the body surface). When they could see the application, the reported sensations were at the point of application, whereas when they were blindfolded, the reported sensations did not correlate with the site of application (Kaptchuk 1998).

However, until the early 1940s, little was done to verify the effectiveness of the many ‘drugs’ and procedures in common use, and practices such as purging, bloodletting, puking, leeching, and various surgeries continued. It was not until the mid to late 1940s that the need for some sort of proof of result became evident. The advent of antibiotics marked the beginning of the current era of medical experimental design, in which it was recognized that several design factors were necessary to improve the credibility of study outcomes. With the development of the first antibiotics came the need for rigorous experimental methodology to determine the effectiveness of the new drugs, and rigorous experimental pharmaceutical research design arose along with the growth of the modern pharmaceutical industry.

However, one of the major questions that must be asked is whether the pharmaceutical model of medical research applies to research in manual treatments and therapies. Understanding the differences between pharmaceutical and manual procedures will allow the correct application of experimental design to the manual arts; incorrect application of design principles will lead to false conclusions about the effectiveness of manual treatments and therapies.


In the discussion that follows, manual treatments are those procedures performed by fully licensed physicians, whereas manual therapies are those performed by other healthcare providers. The term manual procedures will be used to denote both.



Types of study


Several recognized types of study can be applied to show the effectiveness of medical procedures (Patterson 2003), and they have varying power to demonstrate a causal relationship between procedure and outcome. The case study design is seen as the weakest model for showing effectiveness. Here, a single case or a few cases are reported, together with the treatment given and the observed outcome. Because of various confounding factors, it is often difficult to tell from a case report whether the treatment given actually produced the observed outcome. A somewhat stronger design is the prospective case series, in which a protocol for identifying patients, a format for collecting data about each case, and a means of clearly identifying changes in symptoms after treatment are put in place. Here, owing to the systematic process, there can be somewhat more credibility in any change attributed to treatment. However, such case studies and series do little to actually establish a cause and effect relationship between the treatment and subsequent changes in the disease state or function of the patient. There are of course other types of study design, such as epidemiological, survey, and descriptive studies, that are very valuable in biomedicine but do not show cause and effect. These designs can often begin to pinpoint relationships that can then be studied with experimental designs (Patterson 2010).


Indeed, proving cause and effect relationships is very difficult, especially in human medicine. Over thousands of years of evolution, humans have developed a huge capacity to recognize correlations between events and outcomes. The rustle in the grass on a dark night correlates well with the approach of a predator looking for a meal. The human thus links the noise in the grass with the danger of being eaten and retreats to the cave. However, the rustle may be caused by a number of things, such as a non-predator, the wind, or a family member returning from a night on the savannah. The wind does not cause the hearer to be eaten. Thus, we are very good at detecting correlations, but assigning cause and effect relationships is much more difficult.


Thus, to obtain data that can give some indication of a real cause and effect relationship between treatment and outcome, there needs to be an experimental design that allows comparisons between treated and untreated patients, or between patients given one of two or more treatments. These designs have varying levels of complexity and explanatory power, depending on factors such as the number of subjects, what is measured, and many others.


This chapter will examine the use of control groups in experimental designs and how they apply to manual procedures. Of major interest is the meaning of changes in outcomes in control groups that are attributed to the ‘placebo’ effect, and how this concept applies to manual procedures.



The gold standard


We will begin the examination of experimental designs, and especially control groups, by considering the ‘gold standard’ for such designs and how it may or may not apply to manual procedures. The current ‘gold standard’ for biomedical research is the randomized, double-blind placebo-controlled (RDBPC) clinical trial. This was developed in the 1940s and 1950s to answer a very specific question stemming from the introduction of the antibiotic drugs. For all practical purposes, the question that was to be answered was: ‘What is the effect of this drug on the natural course of a disease process in the human who is unaware of what drug, if any, is being given?’


The need in medical studies for the features of this design is well recognized. Randomizing subjects to the two or more arms of a study, so that important variables such as age, gender, and presenting symptoms are equal in the various study groups, is one of the baseline requirements of an experimental study. Randomization gives some assurance that the two or more groups of subjects are equal at the beginning of the trial on all important measures that might affect the outcome. This provides the necessary starting point for the trial. Provided that the assumptions of randomization are met and the groups are essentially identical, especially on those variables that are to be measured as outcomes, any differences in outcome are more likely to be the result of the drug or procedure given to one group and not the other, rather than of some underlying initial difference between the groups.
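
The logic of randomization can be made concrete with a brief sketch. The following is a hypothetical example in Python (the arm labels, block size, and function name are illustrative assumptions, not part of any published trial protocol); it simply shows that chance, rather than the investigator, determines each subject's group assignment while keeping the arms the same size.

```python
import random

def block_randomize(subject_ids, arms=("treatment", "control"), block_size=4, seed=None):
    """Assign subjects to study arms using simple block randomization.

    Each block contains an equal number of slots for every arm, so group
    sizes stay balanced even if recruitment stops partway through the trial.
    """
    if block_size % len(arms) != 0:
        raise ValueError("block_size must be a multiple of the number of arms")
    rng = random.Random(seed)
    assignments = {}
    block = []
    for sid in subject_ids:
        if not block:
            # Start a new block: equal slots for every arm, then shuffle.
            block = list(arms) * (block_size // len(arms))
            rng.shuffle(block)
        assignments[sid] = block.pop()
    return assignments

# Example: eight hypothetical subjects assigned to two arms.
print(block_randomize(range(1, 9), seed=42))
```

Real trials typically use dedicated allocation software together with allocation concealment, but the principle is the same: assignment is determined before outcomes are known and independently of the subjects' characteristics.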


Blinding in experimental trials was established during the development of the model in order to guard against several factors, other than the cause and effect relationship under study, that could influence the outcome. First, and most importantly, the person collecting the data, in whatever form, must not know to which trial group the subject was assigned. It is readily acknowledged that even the most conscientious investigator can unwittingly affect the results of a study by judging the outcomes of a treated patient as better than those of an untreated patient (or a patient given a different treatment). In the worst case, the investigator may even consciously skew the results to favor the hoped-for outcome. Thus, in all experimental studies, the person or persons who collect the data must be blinded to patient assignment.


In drug trials the double-blind designation usually also refers to the patient and to the person administering the drug and the control substance. Here, the object is to keep the patient from knowing what substance they are receiving, and hence to avoid the possibility that this knowledge will sway the outcomes of the trial. Thus, in the typical drug study, the blinding is actually a triple blinding, with patient, drug administrator, and data collector blinded as to what is being given to the patient. It is impossible to triple blind procedure and surgical studies because the treater or surgeon must know what is being done.


The drug trial model also includes the administration of what is commonly known as a placebo. This is done so that patients do not know whether they are being given an active substance or one that has no effect on the course of the process or disease for which the drug is being tested. In strict form, the placebo should look like, taste like, feel like, and weigh the same as the active drug, so that neither the patient nor the substance administrator can deduce whether the patient is being given the active drug or the inactive substance. This then meets the requirements of the ‘gold standard’ drug trial.


As stated earlier, the goal of this design is very specific and tries to rule out or equalize all other factors that might influence the results or outcomes other than that of the active ingredient, the drug under study. Thus, for the question asked of most drug trials, the randomized, triple-blind placebo-controlled trial is appropriate and very useful in determining cause and effect relationships.



The issue of the placebo


Although it may seem simple to construct an acceptable placebo substance for most drug trials, problems arise almost immediately. What if the drug under consideration has certain side effects, for example some degree of nausea? If the placebo does not cause the nausea, the subjects in the experimental group have a different sensory experience from those in the placebo group, thereby potentially biasing the results. Thus, many drug trials in which the active ingredient causes some sensory experience for the patient attempt to mask that experience, or to create a placebo that also causes that experience but has no active effect on the process or organism causing the disease. Again, this is an attempt to keep the subject from knowing to which group he or she has been assigned. Thus, even in the best of circumstances, the design of a placebo-controlled study can be quite difficult.


However, the issue of the placebo is much more than this. In 1955, Beecher published his famous article (Beecher 1955) that set the stage for the debate about the effects of placebos that continues to this day. Based on his analysis of 15 studies, he claimed that in several diseases, 35% of patients could be successfully treated by the administration of a placebo alone (Kiene 1996a, b, Kienle & Kiene 1996). This shifted the concept of a placebo and its effects from something that was an inert substance and hence did nothing, to something that produced some effect on the patient. The placebo effect in the response to a given disease has been estimated to range from 0% to almost 100% of the total effect seen during treatment (Kienle & Kiene 1996).

What has changed? The placebo as originally defined was an inert substance having no effect on the disease being studied. Suddenly, it is seen as having anything from no effect to an almost curative one. How can an inert substance have an effect on anything? The answer lies in a shift in thinking about the meaning of placebo, and a shift in emphasis from the placebo to the placebo response. As originally conceived, a placebo was given to keep the subject from knowing the group assignment and thus from providing information that would please the investigator (placebo, ‘to please’) and thereby bias the outcome measures. However, it is apparent that, given the definition of a placebo as an inert substance with no effect, the placebo response cannot be caused by the placebo but must be caused by the patient. Benedetti (2009) has recently summarized this concept well by stating that the placebo response is ‘…a psychobiological phenomenon occurring in an individual or in a group of individuals.’ Thus, the emerging understanding is that the patient’s response to a placebo, an inert substance having no effect, is caused by the patient’s expectations, beliefs, and ideations, and not by an active treatment.


The literature on the placebo and the placebo response is huge. In 2006, Moerman indicated that a PubMed database search for just reviews of placebo yielded 10 062 articles. In May 2009 a search for placebos yielded 28 385 articles (Patterson search, 31 May 2009). It is evident that most of these articles are studies using placebo controls in one form or another, but many are attempts to define the characteristics of the placebo response itself. However, there is little agreement on what the response is, or how large it may be. Indeed, in their seminal article on the placebo concept, Kienle and Kiene (1996) argued that much of the so-called placebo response that has been reported can well be accounted for by such effects as the natural course of the disease process, regression to the mean, concomitant treatments, patients attempting to please, methodological defects in the studies, and misquotes, among other things. In their discussion, Kienle and Kiene suggest that psychosomatic phenomena are not to be considered placebo responses if they are not elicited by a specific placebo treatment (the administration of a placebo substance). They readily admit to the power of psychosomatic events on physiological function, but state that ‘When psychosomatic events are indiscriminately labeled “placebo effects,” both are shown in a false light: The placebo effect is given undue status, whereas psychosomatic effects are undeservedly discredited’ (Kienle & Kiene 1996). Thus, there is obviously no real agreement among major authors about what constitutes a placebo response. Kienle and Kiene’s definition ties it to a specific circumstance, whereas Benedetti’s definition is much broader. In any event, it seems to entail effects that are not directly tied to the active ingredient being given in a drug trial.
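
One of these competing explanations, regression to the mean, is easy to demonstrate with a small simulation. The following is a hypothetical sketch in Python (the severity scale, cut-off, and noise level are arbitrary assumptions, not taken from any of the cited studies): patients enrolled because a symptom score is unusually high at baseline will, on average, score closer to their usual level at a second measurement even when nothing at all is done.

```python
import random

random.seed(1)

def symptom_score(true_severity, noise_sd=15):
    """Measured score = stable underlying severity + day-to-day random variation."""
    return true_severity + random.gauss(0, noise_sd)

# 10 000 hypothetical patients who all share the same underlying severity.
true_severity = 50.0
first_scores = [symptom_score(true_severity) for _ in range(10_000)]

# Enroll only those whose first measurement exceeds a cut-off (a 'flare-up').
enrolled = [s for s in first_scores if s > 65]

mean_at_enrolment = sum(enrolled) / len(enrolled)
mean_at_followup = sum(symptom_score(true_severity) for _ in enrolled) / len(enrolled)

print(f"mean score at enrolment:            {mean_at_enrolment:.1f}")
print(f"mean score at follow-up, untreated: {mean_at_followup:.1f}")
```

The apparent ‘improvement’ between the two measurements arises purely from selecting high baseline scores plus random variation, which is one reason untreated or sham-treated control groups are needed to separate such artifacts from genuine treatment or placebo effects.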


To be fair, there are many more aspects to the placebo response than have been touched on here. Benedetti’s book and many of his articles (e.g., Benedetti 2008) discuss placebos in relation to specific circumstances and disease processes. In fact, one especially interesting study that Benedetti carried out involved the administration of pain-reducing drugs (Benedetti et al. 2003). The study used administration of narcotics either by a doctor at the bedside injecting the substance in an overt manner, or by a ‘hidden’ injection done mechanically without the patient’s knowledge that the drug was being injected. The results showed a marked increase in effectiveness with the overt administration (or, conversely, a decreased effect with the hidden administration). This is clearly a psychobiological effect that is a combination of the patient’s knowledge and the drug itself. Presumably the effect is mediated by the psychological knowledge influencing endogenous opioids that enhance the effects of the drug. Besides an interesting discussion of placebo effects, Benedetti also discusses the opposite effect, the nocebo, which enhances pain and distress (Benedetti 2009, Enck et al. 2008).
