Intraoperative Management of Adult Lumbar Scoliosis




© Springer International Publishing AG 2017
Eric O. Klineberg (ed.), Adult Lumbar Scoliosis, DOI 10.1007/978-3-319-47709-1_9


9. Intraoperative Management of Adult Lumbar Scoliosis



Dana L. Cruz¹, Louis Day¹ and Thomas Errico²

(1)
Spine Research Institute, NYU Langone Medical Center, New York, NY, USA

(2)
Department of Orthopaedic Surgery, NYU Langone Medical Center, New York, NY, USA




Introduction


While numerous studies demonstrate the benefits of operative correction of adult lumbar scoliosis, these surgeries are not without serious risks [1–4]. Recent studies estimate the rate of complications to be as high as 80 % in certain populations following decompression and fusion [1, 5–7], with significantly greater risk associated with increased age, construct length, number of osteotomies, and revision surgery [1, 5, 8]. Major complications occurring in the perioperative period include, for example, vascular injury, excessive blood loss, deep vein thrombosis, nerve root injury, and deep wound infection as well as life-threatening complications such as sepsis, myocardial infarction, pulmonary embolism, and catastrophic neurologic injury.

The rate of complications associated with operative correction of lumbar scoliosis is not entirely attributable to the procedures themselves. Because lumbar scoliosis is frequently degenerative in nature, the afflicted population tends to be older, often with multiple medical comorbidities. Even in the absence of comorbid conditions, older patients demonstrate diminished physiologic reserve compared to their younger counterparts, including cardiac, renal, pulmonary, and immunologic function, contributing to an increased vulnerability to external insults. In a 2007 study of outcomes following adult spinal deformity surgery, authors estimated that patients older than 69 years were nine times more likely to experience a major complication compared to those younger than 69 years [8]. Combined with comorbid disease states, these patients are increasingly susceptible to a cascade of peri- and postoperative complications. Despite the risks associated with advanced age and comorbidity, however, it is important to recognize that patients with lumbar scoliosis exhibit staggering preoperative disability and have the greatest potential for improvement after surgery [2].

Notwithstanding medical optimization and consideration for patient risk factors, there are a number of perioperative interventions with the potential to significantly reduce the risks associated with these complex procedures. The primary goal of this chapter is to introduce several concepts of intraoperative management with the ultimate goal of reducing complications and improving overall patient safety.


Blood Conservation Techniques


Blood loss volumes ranging from 500 mL to more than 4 L are not uncommon in spinal deformity surgery, and though no single definition of excessive blood loss exists across studies, it is frequently cited as the most common complication [9, 10]. Moller et al., in a prospective study of patients undergoing lumbar spinal fusion, found that instrumented patients experienced an average blood loss of 1.5 L, with one patient suffering as much as a 7 L loss [11]. In another study of 199 patients with degenerative lumbar deformity undergoing fusion, more than half of patients experienced blood loss greater than 500 mL, which corresponded with a significantly greater rate of perioperative complications and a longer hospital stay [10]. Intraoperative blood loss is predictably correlated with the number of fusion levels and operative time as well as the number and type of osteotomies. A posterior vertebral column resection, for example, can be associated with blood loss of 10 L [12]. Excessive blood loss in these settings contributes to greater fluid shifts, which can detrimentally impact cardiac, pulmonary, and renal function [13]. Efforts to reduce perioperative blood loss in major spine surgery are therefore an important step toward improving patient outcomes.

In general, the two types of strategies used to address excessive blood loss and its sequelae are responsive and preventative. Responsive measures treat the resulting hypotension and anemia with blood replacement, fluid administration, and medications such as vasopressors. Though these tools are employed with great success in trauma patients with major blood loss, recent investigations examining the ratios and volumes of crystalloids, colloids, and blood products administered to spine patients suggest potentially negative influences on postoperative recovery including, for example, extubation status [14, 15]. These authors suggest that further investigation be pursued to better describe shifts between fluid compartments and to define optimal resuscitative protocols in this patient population.

Blood transfusions are also not without risks. Even blood salvage procedures, which collect and reinfuse autologous blood, have been shown to cause minor transfusion reactions including fever, chills, and tachycardia [16] and are associated with significantly increased costs [17]. Furthermore, exposure to allogenic blood products is known to increase the risk of disease transmission, hypothermia, coagulopathy, hyperkalemia, hypocalcemia, and transfusion reactions, while mounting data suggest these products impair the immune response and potentially increase the risk of postoperative infection [18–20]. Though exceedingly rare, transfusion-related acute lung injury, hemolytic transfusion reactions, and transfusion-associated sepsis are now known to be the leading causes of allogenic blood transfusion-related deaths [21, 22]. Until relatively recently, these responsive measures served as the primary methods used to address excessive intraoperative blood loss, and they have demonstrated measured success.

In addition to the resuscitative measures used to limit the impact of severe blood loss, several blood conservation methods have been effectively employed for surgeries where blood loss is of significant concern. These preventative methods include controlled hypotensive anesthesia, autologous blood donation, and antifibrinolytic administration, to name just a few. Even simple maneuvers such as patient positioning can influence blood loss during spinal surgery [23]. By allowing the abdominal contents to hang freely on a Jackson frame, intra-abdominal pressure is reduced, lowering pressure in the inferior vena cava and epidural venous system and thereby reducing bleeding at the operative site. Combined or used in isolation, these preventative measures have the potential to drastically reduce perioperative blood loss and improve patient outcomes.


Controlled Hypotensive Anesthesia


Since as early as the 1970s, spine surgeons have advocated for hypotensive anesthesia as a method to reduce blood loss and improve visualization of the surgical field [24]. Several medications have been used historically to achieve a mean arterial pressure (MAP) of 60 mmHg, including ganglion blocking agents, calcium channel antagonists, nitroprusside, and nitroglycerin, though little evidence supports the superiority of any one particular agent [25–27]. Reduced intraoperative blood pressure leads to a direct reduction in bleeding from injured arteries and arterioles, while venous dilation decreases bleeding, especially from cancellous bony sinuses that do not collapse when transected. Controlled hypotensive anesthesia is a blood conservation technique frequently used in adolescent idiopathic scoliosis (AIS) cases with good results; however, caution must be used in its application to older patients. While adolescents may tolerate a MAP of 50–60 mmHg well, patients with carotid artery stenosis or coronary artery disease, for example, are at increased risk of hypoperfusion and end-organ injury. The careful balance of hypotension and perfusion is particularly relevant in the context of deformity surgery given the sensitivity of neurophysiologic monitoring modalities to cord hypoperfusion. This is especially true in deformity surgery that may require a three-column osteotomy in the setting of an already sick or injured spinal cord. Despite the risks, previous studies have demonstrated a reduction in blood loss and transfusion requirements with the use of this technique alone or in combination with other techniques [28–31].


Autologous Blood Donation


Autologous blood donation is another technique frequently practiced in the United States used to decrease rates of allogenic transfusion during spine surgery [32]. The two most common applications of autologous blood donation include pre-donation and acute normovolemic hemodilution. Pre-donation of autologous blood occurs between 1 and 4 weeks prior to surgery and may be supplemented with the administration of recombinant erythropoietin [33]. Acute normovolemic hemodilution is a similar method used to obtain autologous blood for transfusion; however, it is typically performed on the day of surgery with replacement of blood volume using crystalloid fluids to achieve normovolemia [34].

Though pre-donation of autologous blood is proven to reduce rates of allogenic transfusion [35], recent publications have questioned its overall value [36]. Several disadvantages of receiving allogenic blood products are shared by pre-donated blood. Of significant concern, pre-donation and storage of autologous blood is an expensive procedure which stores blood as pRBCs without coagulation factors. Additionally, pre-donation does not eliminate the risk of receiving the “wrong” blood, and the donated units may expire in the event of a rescheduled surgery. Lastly, critics of pre-donation blood programs point to the risk of preoperative anemia; however, the use of recombinant erythropoietin has the potential to reduce this risk [33].

Acute normovolemic hemodilution is a blood conservation method with the benefits of autologous blood donation yet fewer disadvantages compared to pre-donation. Because blood is collected for same-day transfusion, a superior blood replacement, whole blood, is made available for use intraoperatively. Additionally, the expensive procedure of collecting and storing pre-donated blood is avoided, while the risk of receiving the “wrong” blood is substantially reduced. Lastly, though normovolemic hemodilution should only be used for patients with normal preoperative hematocrits, recombinant erythropoietin and other supplements may be used for preoperative augmentation. Despite the demonstrated value of autologous blood donation, national trends illustrate increasing rates of allogenic blood transfusion over the last several years, with declines in pre-donated autologous blood transfusions and stable perioperative autologous transfusions [32].


Antifibrinolytics


For generations, clinicians have sought pharmacologic methods to reduce perioperative bleeding and its associated morbidity and mortality. Following the elucidation of the hemostatic pathway, the fibrinolytic system became the logical target of those pharmacologic agents. Derived from bovine lung, aprotinin was the first antifibrinolytic agent introduced into clinical practice, in 1950, for the treatment of pancreatitis and later prophylactically to reduce blood loss in complex cardiac surgery [37]. Acting as a competitive inhibitor of plasmin, aprotinin quickly became ubiquitous within several surgical specialties as studies demonstrated an impressive reduction in postoperative blood loss and transfusion [37–40]. Despite later safety concerns regarding the use of aprotinin in complex cardiothoracic surgery and its subsequent withdrawal from the US market [41, 42], the benefits of antifibrinolytics have been demonstrated across several surgical specialties, leading to their pervasive use [43–48].

The two antifibrinolytics used most widely today are the synthetic lysine analogues tranexamic acid (TXA) and ε-aminocaproic acid (EACA). Both discovered and described by Okamoto in the 1950s [49, 50], these agents act by competitively inhibiting the lysine binding sites of plasminogen, plasmin, and tissue plasminogen activator, thereby inhibiting the lysis of polymerized fibrin. Tranexamic acid is seven to ten times more potent than EACA [51]. TXA and EACA are widely used prophylactically when large blood losses are anticipated, including in cardiac, trauma, liver, obstetric, neurosurgical, and orthopedic surgery. Though recent studies employ various dosing regimens for TXA, efforts are underway to determine the superiority of high- versus low-dose regimens in randomized controlled studies. Meanwhile, retrospective cardiothoracic studies suggest TXA reduces blood loss more effectively than EACA [52, 53], though no prospective studies involving surgery of the spine support this conclusion.

Numerous studies throughout the orthopedic and spine literature demonstrate the efficacy of TXA and EACA in reducing perioperative blood loss and transfusion requirements [44, 45, 5459]. In a 2015 study by Xie et al., authors examined more than 50 patients undergoing complex spine deformity correction and found that those patients who received high-dose TXA demonstrated a statistically significant reduction in blood loss (2441 ± 1666 mL vs. 4789 ± 4719 mL) and decreased rate of transfusion compared to controls without an increase in complications [60]. Similarly, in a recent meta-analysis including 11 randomized controlled trials (644 total patients) investigating the use of TXA on surgical bleeding in spine surgery, authors found that TXA reduced intra-, post-, and total operative blood loss, leading to a reduction in the proportion of patients who received blood transfusions [56].
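As a rough arithmetic check on the Xie et al. figures quoted above, the relative reduction in mean blood loss can be computed directly. This sketch compares the reported means only and ignores the wide standard deviations, so it is an illustration, not a statistical analysis:

```python
# Mean blood loss reported by Xie et al. (means only; the large
# standard deviations are deliberately ignored in this rough check).
txa_mean_ml = 2441.0      # high-dose TXA group
control_mean_ml = 4789.0  # control group

# Relative reduction in mean blood loss with high-dose TXA.
relative_reduction = 1 - txa_mean_ml / control_mean_ml
print(f"relative reduction in mean blood loss: {relative_reduction:.0%}")
```

On these figures, the high-dose TXA group lost roughly half as much blood as controls, about a 49 % relative reduction in means.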

Despite the proven reduction in intraoperative blood loss, the potential complications of antifibrinolysis remain controversial. By the very nature of their mechanism, the antifibrinolytics have the theoretical potential to promote thromboembolic events such as deep vein thrombosis or pulmonary embolism. While recent evidence does suggest that aprotinin increases the risk of myocardial infarction, cerebrovascular accident, and death in the context of complex cardiothoracic procedures, no studies to date demonstrate detrimental prothrombotic effects of the lysine analogues EACA or TXA despite their use for more than 50 years [56, 57, 61]. Of more significant concern, in recent years TXA has been linked to the occurrence of seizures, particularly at high doses. In a retrospective investigation of postoperative seizures among patients undergoing aortic valve replacement with cardiopulmonary bypass, authors found that 6.4 % of patients who received TXA experienced seizure within 24 hours of surgery compared to 0.6 % of patients who received EACA [52]. Despite this association demonstrated in retrospective studies of cardiothoracic patients, no prospective trials to date support an association between TXA and increased seizure risk. Nevertheless, these authors acknowledge that many questions remain unanswered regarding the unintended effects of TXA and other lysine analogues, necessitating further study of their usage and safety in numerous clinical applications.


Intraoperative Neurophysiologic Monitoring


Risk of neurologic injury is inherent to all spine surgeries, and as such many tools have been developed to prevent and identify this complication in the intraoperative setting. Prior to the widespread adoption of intraoperative neurophysiologic monitoring (IONM), studies estimated an incidence of postoperative neurological deficit between 0.5 and 17 % in patients undergoing corrective surgery for scoliosis, with more than half representing partial or complete paraplegia [62–64]. As early as the 1970s, recognizing the frequency and impact of these devastating complications, clinicians utilized advanced techniques of monitoring electrophysiologic potentials in order to predict and prevent serious neurologic insult in real time. Since its introduction, modern IONM has become the standard of care for complex reconstructive spine surgery [65–67], sharply reducing the incidence of postoperative neurologic deficits [66–68]. Following extensive research and broad uptake, in 2009 the Scoliosis Research Society concluded that neurophysiological monitoring is the “preferred method for early detection of an evolving or impending spinal cord deficit,” with the Stagnara wake-up test as a useful adjunct [69].


Stagnara Wake-Up Test


Prior to the popularization of IONM during spine surgery, the Stagnara wake-up test was routinely used to assess neurologic function in the semiconscious patient after instrumentation and prior to closure [64, 70]. In 1973, Vauzelle et al. published their principal work describing case reports of routine motor assessments in the awakened patient during surgical correction of severe spinal deformity [70]. During this gross assessment of motor function, patients are asked to move their hands and feet to predict postoperative paraplegia. The authors demonstrated that, in some circumstances, patients who initially lost voluntary motor control regained it after removal of hardware.

Though the wake-up test is easily performed and reliable in predicting postoperative motor deficit, clinicians note several practical limitations. In order to perform the wake-up test, the patient must be able to follow commands and is necessarily brought to a semiconscious state by weaning of anesthesia, a process that can take several minutes, prolonging intraoperative time and decreasing the potential for neurologic recovery following injury. Furthermore, strong proponents of IONM note the delay in injury identification and the challenge of discerning the inciting event or instrumentation. Despite these limitations, given its reliability in predicting neurologic deficit, the wake-up test is frequently the standard against which other methods of neurophysiologic monitoring are compared.

The earliest applications of modern neurophysiologic monitoring techniques date back to the 1940s whereby physicians examined changes in electrical potentials detected on the scalp in response to electrical stimulation of peripheral nerves [71]. Since that time, significant advancements in technology and neuroscience have propelled the field of neurophysiology, allowing examiners to closely monitor various pathways between the central and peripheral nervous systems. Application of this field in the surgical setting allows clinicians to monitor neurologic status while the patient is under anesthesia and unable to participate in the traditional neurologic exam. For nearly half a century, clinicians have utilized these advanced methods of neurophysiologic monitoring to improve safety during complex spine procedures.


Somatosensory Evoked Potentials


Somatosensory evoked potential (SSEP) monitoring was one of the earliest applications of IONM and continues to be the most widely used modality today. These potentials are generated by stimulation of peripheral nerves distal to the spinal cord region being assessed and are measured at the corresponding sensory cortex. SSEPs are evaluated with regard to amplitude, latency, and signal velocity and may be continuously monitored throughout surgery. SSEPs illustrate the integrity of the dorsal column-medial lemniscus pathway, including the peripheral nerve, dorsal column, medial lemniscus, thalamus, and primary sensory cortex. Generally, the median and ulnar nerves are utilized for SSEP monitoring in the upper extremity, while the posterior tibial or peroneal nerves are used in the lower extremity. Intact, this pathway mediates tactile sensation, vibration, and proprioception.

When compared with other IONM modalities, SSEP monitoring has important advantages. The simple, low-amplitude characteristics of the SSEP waveform make it a highly specific indicator of neurologic injury. In a 1995 study of more than 50,000 scoliosis cases in which SSEPs were used, authors calculated a sensitivity and specificity for new postoperative motor deficits of 92 % and 98 %, respectively [66]. Subsequent studies have reported sensitivities ranging from 25 to 52 % with specificities in the range of 95–100 % [68, 72–75].
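The sensitivity and specificity figures quoted above reduce to simple confusion-matrix arithmetic. The sketch below illustrates the calculation with hypothetical counts; the numbers are purely illustrative and are not drawn from any of the cited studies:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of true postoperative deficits flagged by monitoring."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of uninjured patients with no monitoring alert."""
    return true_neg / (true_neg + false_pos)

# Hypothetical cohort: 40 deficits detected, 10 missed (false negatives),
# 950 uninjured patients without alerts, 50 false alarms.
sens = sensitivity(true_pos=40, false_neg=10)
spec = specificity(true_neg=950, false_pos=50)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```

A highly specific modality such as SSEP raises few false alarms even when its sensitivity to motor injury is modest, which is exactly the trade-off the studies above describe.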

Though SSEP monitoring is the most frequently used IONM modality, there are several limitations to its use as a standalone tool. As discussed previously, SSEPs provide information regarding the integrity of the dorsal column-medial lemniscus tract with excellent reliability [76–78] but provide no information regarding corticospinal function. SSEP monitoring can be critically important, for example, while passing sublaminar wires, a notable opportunity for direct injury to the dorsal columns, though it is of little utility in the event of nerve root injury. Additionally, SSEP interpretation can be confounded by systemic conditions in the absence of neurologic injury: hypotension, hypothermia, hypocarbia, hypoxemia, anemia, and even specific anesthetics all have the potential to attenuate the SSEP signal. Lastly, and perhaps most critically, SSEP interpretation requires temporal summation and averaging, which can delay detection of an acute injury. Depending on ambient noise, detection of a significant signal change may lag 5 min or more behind the time of injury, reducing the window of opportunity for successful intervention. In a 2004 comparison of SSEPs and motor evoked potentials (MEPs), authors found that SSEP signal alterations lagged behind those of the MEPs by 16 min on average, with one patient demonstrating a 33-min delay in detection [74].

Despite these limitations, studies demonstrate that SSEP monitoring reduces rates of postoperative neurologic deficit. In a retrospective review of 295 patients undergoing spinal stabilization following acute injury, authors identified new postoperative neurologic deficits in 0.7 % of patients monitored intraoperatively with SSEP compared to 6.9 % of patients who were unmonitored or assessed by wake-up test alone [79]. In a similar comparison of patients undergoing cervical spine surgery, Epstein et al. identified eight (3.7 %) of 218 unmonitored patients with postoperative quadriplegia compared to zero instances among 100 patients monitored by SSEP [80].


Motor Evoked Potentials


The direct monitoring of the corticospinal pathway via motor evoked potentials (MEPs) gained widespread use following improvements to Merton and Morton’s landmark 1980 work describing transcranial stimulation of the motor cortex. MEPs are similar to SSEPs in that they are used to assess a specific pathway between the central and peripheral nervous systems and are evaluated with regard to amplitude, latency, and signal velocity. In contrast to SSEPs, however, MEPs are generated by transcranial stimulation of the motor cortex and measured distally at multiple upper and lower extremity muscle groups. In this way, MEPs illustrate the integrity of the entire motor axis, including the motor cortex, corticospinal tract, nerve root, and peripheral nerve, just as SSEPs reflect the somatosensory pathway. Intact, the corticospinal pathway mediates voluntary muscle contraction.

There are several distinct advantages to MEP monitoring when compared to SSEP. Unlike SSEP monitoring, which is highly specific in predicting postoperative somatosensory deficit, MEPs describe the integrity of the motor axis, a domain of significant functional importance. Furthermore, MEP monitoring demonstrates excellent sensitivity in detecting postoperative motor deficits and even demonstrates good reliability in detecting spinal cord ischemia [81–83]. In a 2007 study involving more than 1100 cases of scoliosis, MEP monitoring demonstrated 100 % sensitivity in identifying postoperative motor loss, compared to a sensitivity of 43 % for SSEP [68]. Several other studies report similar MEP sensitivities ranging from 75 to 100 % and specificities ranging from 84 to 100 % [68, 73–75, 84–87]. Lastly, in contrast to SSEP monitoring, which requires averaging of potentials for interpretation, MEP monitoring allows for immediate assessment of corticospinal integrity without delay.

Though the use of MEP monitoring confers important advantages compared to SSEPs, significant disadvantages exist as well. Monitoring the entire motor axis from the cortex to the peripheral nerve and muscle requires a complete and functional pathway. As such, the utility of MEP monitoring is significantly diminished by inhalation anesthetics, which decrease MEP amplitude and increase latency [88], and by muscle relaxants, which interfere with transmission at the neuromuscular junction [84]. For these reasons, total intravenous anesthesia is the anesthetic of choice during MEP monitoring [82], though in practice low-dose halogenated agents such as isoflurane are frequently used. As with the SSEP signal, systemic conditions such as hypotension, hypothermia, hypocarbia, hypoxemia, and anemia may attenuate MEPs, further complicating their interpretation. Unlike SSEP monitoring, which can be performed continuously throughout surgery, MEP monitoring is performed intermittently, though it permits immediate assessment after high-risk maneuvers. Lastly, and of great importance, the characteristics of the MEP waveform make interpretation challenging [89]. MEPs demonstrate high amplitude with much greater variability when compared to SSEPs. A change in signal amplitude following instrumentation can therefore be the result of neurologic insult or simply a characteristic of the waveform. For these reasons, several definitions of warning criteria, with varying sensitivities and specificities, are used in the interpretation of MEPs.


Electromyography


Electromyography (EMG) is another valuable tool used by clinicians to monitor neurophysiologic status in the intraoperative setting. Briefly, EMG is a procedure which monitors compound action potentials in specific muscle groups either with proximal stimulation (triggered EMG, tEMG) or without (spontaneous EMG, sEMG). Because postoperative radiculopathy is a complication encountered more frequently than spinal cord injury, EMG monitoring is of particular utility in the setting of spinal instrumentation, providing the ability to monitor selective nerve roots at risk of injury.

Spontaneous EMG and triggered EMG are two common forms of EMG with distinct applications, advantages, and information conveyed. sEMG monitoring is performed without stimulation of the nerve root, producing a continuous recording of activity within select muscle groups. At baseline, a healthy nerve root does not produce activity, whereas irritation or injury during surgery results in distinctive patterns of neurotonic discharges. Phasic-type discharges, for example, are most often associated with blunt mechanical trauma, whereas tonic waveforms are frequently the result of nerve ischemia due to traction, heat from electrocautery, or irrigation [90]. tEMG, on the other hand, is a technique which makes use of nerve stimulation to record conduction velocity and amplitude at the muscle. Based on this principle, tEMG is a particularly useful modality for assessing pedicle screw placement [91]. A pedicle screw well positioned in cortical bone should be electrically insulated. An EMG response observed following direct stimulation of that screw therefore suggests that the screw is in close contact with the nerve root, and further investigation of pedicle integrity should be promptly pursued.
