Key Points
- Evidence-based management requires the combination of the best evidence with patient values and provider preferences to make treatment decisions.
- The practice of evidence-based management involves question formulation, acquisition of relevant literature, appraisal of study quality, and the appropriate application of research findings to individual patients.
- Evidence-based management does not strictly depend on the results of randomized controlled trials, but more accurately involves the informed and effective use of all types of evidence.
Case 1
A 75-year-old female trips and falls, landing on her outstretched right hand. She complains of right wrist pain when presenting to the fracture clinic a few days later. Initial radiographs show a minimally displaced distal radius fracture (Fig. 1A–C).
Case 2
A 22-year-old male patient presents to the emergency room complaining of wrist pain after falling off a motorcycle. Initial radiographs show a displaced radial styloid fracture (Fig. 2A–C).
How can you come to an evidence-based decision in the management of each of these patients’ injuries?
What roles do patient and provider preferences, the practice environment, and the current literature play in the treatment ultimately chosen?
It has been more than a decade since the British Medical Journal named the emergence of evidence-based medicine as one of the 15 greatest medical milestones since the journal’s inception in 1840. The term “evidence-based medicine” (EBM), first used in 1991 by Dr. Gordon Guyatt, describes a group of related principles initially developed by Dr. David Sackett and colleagues at McMaster University. Evidence-based medicine is described in Dr. Sackett’s seminal paper as “integrating individual clinical expertise with the best available external clinical evidence from systematic research.” Gathering the best available evidence requires a clear, clinically important question, followed by a thorough, systematic review of the literature, and finally a critical assessment of validity (i.e., relevance and quality) in light of the original question. The paradigm of evidence-based medicine can thus be divided into three key components: practitioners apply (1) the gathered evidence in light of (2) clinical expertise and (3) patient values.
In 2000, less than 5 years after the seminal paper by Dr. Sackett, the Journal of Bone & Joint Surgery first published the term “evidence-based orthopedics.” The development and popularization of evidence-based orthopedics brought to light the novel challenges of applying EBM in orthopedics, for example, the difficulty of appropriately blinding or objectively assessing surgical interventions. Distal radius fractures, among the most common fractures in both the young and the elderly, have been a focus of evidence-based orthopedics since the inception of the term. A cursory search of Embase and MEDLINE from 1996 onwards for “distal” and “radius” and “fracture” yielded over 13,000 publications, more than 500 of which were randomized controlled trials (RCTs). The relative abundance of publications can complicate, rather than simplify, the application of EBM. For example, not all RCTs are of equal quality; failing to recognize low-quality evidence, randomized or otherwise, may lead to inappropriate conclusions and can potentially encourage practices detrimental to patients. Conversely, failing to recognize high-quality evidence may delay the adoption of beneficial practices, again to the detriment of patients. It is vital for surgeons to understand not only the basic principles of EBM, but also the hierarchy of evidence, study design and quality, and the presentation of results.
Principles of Evidence-Based Management
The Evidence Pathway effectively organizes the key principles of evidence-based management into a simple algorithm:
- Assess: Identify a clinical issue affecting patients and outcomes, and understand its importance.
- Ask: Formulate a specific research question, directly related to the issue at hand, to serve as the foundation for a structured literature review. According to the PICO framework, a well-built clinical question identifies the patient population, intervention or exposure, comparator, and outcomes of interest. For Case 1, for example, one might ask whether operative fixation, compared with cast immobilization, improves pain and function in elderly patients with minimally displaced distal radius fractures.
- Acquire: Perform an objective, systematic search of databases and other sources to obtain relevant evidence. Other sources may include (but are not limited to) bibliographies, research conference abstracts, and content experts.
- Appraise: Critically evaluate the acquired evidence based on the hierarchy of evidence and the validity of results with respect to methodological quality and clinical relevance.
- Apply: In conjunction with patient values and clinician expertise, apply the collected, evaluated evidence.
Application of this framework allows surgeons to make evidence-based decisions, particularly when faced with the sprawling literature base informing treatment of distal radius fractures. The remainder of this chapter unpacks this approach, equipping readers with the practical knowledge required to interpret, summarize, and apply the rest of this text.
Quality of Evidence and the Hierarchy of Evidence
For the busy clinician, deciphering an endless mountain of literature, personally appraising each article, and synthesizing numerous results across multiple outcomes may appear an insurmountable task. Fortunately, this process can be accelerated by grouping studies by similarities in methodology. Contemporary evidence hierarchies separate study designs by their susceptibility to bias, thereby providing an initial measure of quality. The findings of methodologically rigorous studies are less likely to be influenced by bias, known or unknown, and as such, are more likely to be valid. The most rigorous of designs, the RCT, occupies the top of the hierarchy, followed by controlled observational studies, uncontrolled case series, and then expert opinion. This broad grading system, introduced by Dr. Sackett, has been widely adopted across specialties and journals. To understand the hierarchy of evidence is to understand the merits and demerits (i.e., sources of bias) associated with each study design.
Randomized controlled trials exist at the top of the hierarchy due to their unique ability to mitigate multiple sources of bias. Although relatively rare across the breadth of surgical literature, the use of RCTs is increasing worldwide. By randomly allocating patients to either an intervention (new treatment) or control arm (standard or no treatment), RCTs minimize the influence of selection bias by evenly distributing both known and unknown factors—potential confounding variables. Unfortunately, randomization alone does not mitigate other types of bias that can compromise study validity and necessitate demotion from Level I (highest quality) evidence to Level II. In addition to randomization, key methodological features of RCTs include allocation concealment, blinding, avoidance of expertise bias, minimization of attrition, and intention-to-treat analysis.
Allocation Concealment —The investigators enrolling patients are unable to determine which treatment arm the next patient will be assigned to. Acceptable methods include central (internet- or telephone-based) allocation, or the use of sequentially numbered, sealed, opaque envelopes. Allocation methodology susceptible to bias includes the use of patient chart numbers, odd/even dates, or unsealed envelopes.
Blinding —Involved individuals are unaware of which treatment arm the patient has been allocated to. Groups that can be blinded include patients, clinicians, outcome collectors, outcome adjudicators, data analysts, and manuscript writers. The more groups that are blinded, the lower the likelihood of performance or detection bias due to knowledge of treatment allocation.
Expertise Bias —The differential ability of a clinician to apply the intervention or procedure, due to skill or prior beliefs. This may occur when a surgeon is asked to perform a procedure in which they lack proficiency, or one they believe to be less effective than the alternative treatment arm.
Attrition —Loss of patients to follow-up to the point where the final cohort may no longer represent the original cohort. Traditional thresholds have required at least 80% of patients to be included at final follow-up. Bias may occur if those who drop out of a trial systematically differ from those who remain.
Intention-to-Treat Principle —The analysis of patient outcomes according to the treatment group to which they were allocated, regardless of the treatment ultimately received during the trial. For example, a patient randomized to nonoperative management who later crosses over to surgery is still analyzed in the nonoperative group. This form of analysis preserves the power imparted by randomization to balance the distribution of known and unknown factors among the treatment groups.