3 Systematic Reviews and Meta‐Analyses

Joseph T. Patterson MD1 and Saam Morshed MD PhD MPH2
1 Keck School of Medicine of the University of Southern California, Los Angeles, CA, USA
2 Orthopaedic Trauma Institute, San Francisco General Hospital and the University of California, San Francisco, CA, USA

Introduction

Published medical research enhances our understanding of disease and helps us critically evaluate the efficacy of our treatments. As the volume of published research grows, it becomes unrealistic to attempt to read all of the primary source literature on a clinical question (Figure 3.1). The purpose of a review is to summarize updates from recent research, outline the scope of a topic, or pool data from multiple studies to draw insights not obtainable from a single study. A hierarchy of reviews exists with respect to methodology, objectivity, and clinical utility. The purpose of this chapter is to equip the reader with an understanding of the appropriate role of each type of review, as well as the tools to create each.

Top four questions

Question 1: What are the types of literature reviews?

Narrative reviews, scoping reviews, and systematic reviews are descriptive, or nonquantitative. Meta‐analysis adds statistical comparison of treatment effects using data pooled from multiple studies. Network meta‐analysis indirectly compares more than two treatments through linked analyses of common comparators across multiple studies.

Narrative review

A narrative review is a selected summary of the primary literature, often intended as a concise synopsis of recent advances or a reference guide for readers new to a topic. A narrative review is not the most objective source of evidence because it is vulnerable to multiple types of bias. Selection bias arises when articles are included or excluded without prespecified criteria. Inattentive data abstraction produces measurement bias. Reporting bias stems from disingenuous descriptions of methods or data. Narrative reviews may include expert interpretations based on the authors’ experience, and confirmation bias may occur if the authors report only findings that support their personal beliefs. Time lag bias may occur if authors omit new reports of efficacious treatment or advocate a therapy that has since been proven harmful or ineffective.1

Systematic review

A systematic review is a scientific investigation of the published literature that objectively summarizes the available evidence. Cook et al. defined it as “the application of scientific strategies that limit bias to the systematic assembly, critical appraisal, and synthesis of all relevant studies addressing a specific healthcare question.”2 The scope is narrow, often a single question on a specific topic. The review process is an algorithmic assembly and assessment of original studies as “subjects” drawn from multiple sources according to a prospectively defined protocol.3 The protocol, which makes a systematic review a reproducible investigation, specifies the sources and search strategy for identifying potentially relevant articles, the inclusion and exclusion criteria for article selection, and the methods for data abstraction and analysis. When the included studies are sufficiently similar to statistically pool effects, meta‐analysis may be performed, as discussed below.
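To make the protocol elements just described concrete, a prospectively defined protocol can be captured as a structured record before any article is screened. The sketch below is a minimal, hypothetical illustration; the clinical question, sources, search string, and every field name and value are invented for this example and are not drawn from the chapter.

# A minimal sketch (hypothetical, not from this chapter) of how a prospectively
# defined review protocol might be recorded as a structured object.
from dataclasses import dataclass

@dataclass
class ReviewProtocol:
    question: str                       # the single, specific clinical question
    sources: list[str]                  # databases and registries to be searched
    search_strategy: str                # reproducible search string
    inclusion_criteria: dict[str, str]  # design, population, intervention, etc.
    exclusion_criteria: list[str]
    data_items: list[str]               # fields abstracted from each included study
    planned_analysis: str               # prespecified synthesis method

protocol = ReviewProtocol(
    question="Does treatment X reduce deep vein thrombosis after procedure Y?",
    sources=["MEDLINE", "Embase", "Cochrane CENTRAL"],
    search_strategy="(treatment X) AND (procedure Y) AND (deep vein thrombosis)",
    inclusion_criteria={
        "study_design": "randomized controlled trial",
        "population": "adults undergoing procedure Y",
        "intervention": "treatment X",
        "comparator": "placebo or no treatment",
        "primary_outcome": "deep vein thrombosis confirmed by venography",
    },
    exclusion_criteria=["full text unavailable for translation"],
    data_items=["sample size", "events per arm", "follow-up duration"],
    planned_analysis="pooled risk ratio if included studies are sufficiently homogeneous",
)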
The level of evidence of the review is dictated by the included studies: level I–II evidence from randomized controlled trials (RCTs) will produce a level I–II systematic review, while level III and IV evidence may still have a meaningful role in the study of rare events or in justifying the need for additional research on a sparsely studied topic.4

Scoping review

A scoping review is a truncated systematic review that maps the existing literature on a subject in terms of the volume, nature, and characteristics of the primary research.5 This type of review is useful when a topic has not yet been extensively reviewed, is complex, or appears heterogeneous.6 As a rigorous and transparent method for mapping areas of research, a scoping review can be a standalone project that synthesizes findings and identifies gaps in the existing literature, or a preliminary step that defines the potential breadth and cost of undertaking a full systematic review.5,7

The major limitation of a scoping or systematic review is sensitivity to publication bias. Treatment effects may be overestimated: published trials are more likely to describe positive treatment effects,8 negative results are less likely to be published,9 and unpublished negative results are difficult to locate.

Guidelines for systematic reviews have evolved alongside innovations in review design and analytic methods. The Quality of Reporting of Meta‐analyses (QUOROM) guidelines were superseded by the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement.10 The Cochrane Collaboration produces systematic reviews to inform health decision‐making using an even more stringent quality standard.11 Adherence to reporting guidelines is associated with greater citation rates and scholarly impact.12 Prospectively and publicly registering a protocol on a registry, such as the International Prospective Register of Systematic Reviews (better known as PROSPERO), can prevent unwitting duplication by others and can uncover reporting bias if a completed review does not match what was planned.13

Meta‐analysis

Meta‐analysis is the quantitative investigation of data aggregated through a systematic review of published reports.14 Meta‐analysis can be performed from published results without original or patient‐level data. When individual patient data can be acquired, more nuanced analyses of predictors and effect modifiers may be conducted through patient‐level meta‐analysis. In meta‐analysis, studies must be sufficiently homogeneous for valid comparison: they must evaluate the same test, exposure, or treatment and assess similar outcomes. The outcomes of multiple studies are then pooled: the number of patients and events in each treatment or exposure group are summed across studies, and the association between the exposure or treatment and the outcome is tested with the pooled data, often weighted by the sample size of each study. Pooling studies improves the power of the statistical analysis, increasing the likelihood that a true association will reach statistical significance. However, pooling studies neither eliminates bias nor improves the quality of the included studies. We describe the tools to rigorously perform and evaluate conventional meta‐analysis in this chapter.
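As a small worked illustration of pooling, the sketch below combines hypothetical two-arm results from three invented trials into a single risk ratio using fixed-effect, inverse-variance weighting, one common weighting scheme (the chapter describes weighting by sample size; the method actually used should follow the prespecified protocol). All trial names and counts are invented.

# Minimal sketch of a fixed-effect, inverse-variance pooled risk ratio from
# hypothetical trial data (all numbers invented for illustration).
import math

# (events_treatment, n_treatment, events_control, n_control)
studies = {
    "Trial A": (8, 100, 16, 100),
    "Trial B": (5, 60, 9, 62),
    "Trial C": (12, 150, 20, 148),
}

weighted_sum = 0.0
total_weight = 0.0
for name, (e_t, n_t, e_c, n_c) in studies.items():
    log_rr = math.log((e_t / n_t) / (e_c / n_c))   # per-study log risk ratio
    var = 1/e_t - 1/n_t + 1/e_c - 1/n_c            # approximate variance of the log risk ratio
    weight = 1 / var                               # inverse-variance weight
    weighted_sum += weight * log_rr
    total_weight += weight

pooled_log_rr = weighted_sum / total_weight
se = math.sqrt(1 / total_weight)
low, high = math.exp(pooled_log_rr - 1.96 * se), math.exp(pooled_log_rr + 1.96 * se)
print(f"Pooled RR = {math.exp(pooled_log_rr):.2f}, 95% CI {low:.2f} to {high:.2f}")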
Network meta‐analysis

Network meta‐analysis rests on the transitive property: if A = B and B = C, then A = C; two distinct direct comparisons support an indirect, inferred comparison. Network meta‐analysis – also called multiple treatments meta‐analysis or mixed‐treatment comparisons – expands on this concept to assess the relative effects of more than two interventions on one outcome.8 Several trials, each comparing two or more interventions on a common outcome, provide direct evidence, and indirect comparisons can be made between interventions not directly compared in an actual trial by organizing the studies into closed‐ and open‐loop networks.9,15 Transitivity describes the similarity in patients, interventions, and outcomes across the studies in the network. If patients from all arms of the various trials do not meet the inclusion/exclusion criteria of a single intervention arm, then the principle of transitivity has been violated and the network meta‐analysis is not credible.9 Incoherence is disagreement between the direct and indirect estimates; it may arise from bias in the indirect comparisons due to intransitivity or from bias in the published direct comparisons. Numeric or visual estimates of relative treatment effect may underrepresent uncertainty and do not convey bias, incoherence, or transitivity.16 Conclusions about treatment superiority from network meta‐analysis should therefore be made with caution.
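As a worked illustration of an indirect comparison through a common comparator (a Bucher-style adjusted indirect comparison, valid only if transitivity holds), the sketch below infers an A-versus-C effect from A-versus-B and C-versus-B trials and then checks its agreement with a hypothetical direct A-versus-C estimate. All effect sizes and standard errors are invented.

# Minimal sketch of an indirect comparison through a common comparator B;
# all log odds ratios and standard errors are hypothetical.
import math

# Direct estimates (log odds ratio, standard error) from separate trials:
logor_AB, se_AB = -0.40, 0.15   # A vs B
logor_CB, se_CB = -0.10, 0.18   # C vs B

# Indirect A vs C estimate via the common comparator B (transitivity assumed):
logor_AC = logor_AB - logor_CB
se_AC = math.sqrt(se_AB**2 + se_CB**2)
low, high = math.exp(logor_AC - 1.96 * se_AC), math.exp(logor_AC + 1.96 * se_AC)
print(f"Indirect OR (A vs C) = {math.exp(logor_AC):.2f}, 95% CI {low:.2f} to {high:.2f}")

# If a direct A-vs-C trial exists, disagreement between the direct and indirect
# estimates (incoherence) can be examined on the same scale:
logor_AC_direct, se_AC_direct = -0.55, 0.20
diff = logor_AC_direct - logor_AC
se_diff = math.sqrt(se_AC_direct**2 + se_AC**2)
print(f"Incoherence z-statistic = {diff / se_diff:.2f}")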
The following sections guide the reader through a workflow for performing a high‐quality systematic review and conventional meta‐analysis that conforms to the PRISMA statement and Cochrane Collaboration definitions.10 For further details, we recommend the Cochrane Handbook17 as well as the texts by Petitti18 and Egger et al.19
Question 2: How is a systematic review performed?
Table 3.1 Criteria for study inclusion in a systematic review of a treatment.

Date of publication: Studies of a medication published after the date of regulatory (e.g. Food and Drug Administration) approval of the medication may reduce the vulnerability of the review to reporting bias by excluding trials sponsored by the manufacturer during the regulatory approval process.
Language: Including and searching as many languages as feasible for the review team minimizes selection bias; full text translation may be necessary.
Study design: Type and methods of studies by ICMJE Levels of Evidence.4
Target population: The demographics of the patients and the specific conditions of interest.
Intervention: The exposure or treatment of interest.
Comparator: The control group, i.e. placebo or no treatment.
Primary outcome: Be specific; heterogeneity in the methods of outcome assessment between studies will negatively affect the validity of pooled analyses. For example, for a primary outcome of the rate of deep vein thrombosis, the modalities of assessing that outcome (venography, Doppler ultrasound, or telephone survey) have different sensitivity, specificity, and accuracy.
Secondary outcomes
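A hypothetical sketch of how Table 3.1-style eligibility criteria might be applied during screening is shown below; the candidate records, field names, and criteria values are invented and the logic is deliberately simplified to an include/exclude decision with recorded reasons.

# Minimal sketch of applying prespecified eligibility criteria to candidate
# study records during screening (all records and criteria are hypothetical).
candidates = [
    {"id": 1, "design": "randomized controlled trial", "comparator": "placebo",
     "outcome": "DVT by venography"},
    {"id": 2, "design": "case series", "comparator": "none",
     "outcome": "DVT by telephone survey"},
]

criteria = {
    "design": {"randomized controlled trial"},
    "comparator": {"placebo", "no treatment"},
    "outcome": {"DVT by venography"},
}

for record in candidates:
    reasons = [f"{k}: {record[k]}" for k, allowed in criteria.items() if record[k] not in allowed]
    verdict = "include" if not reasons else "exclude (" + "; ".join(reasons) + ")"
    print(f"Study {record['id']}: {verdict}")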