Brief History of Fluoroscopy
Fluoroscopy is an indispensable part of the orthopedic traumatologist’s armamentarium. Although it could be argued that intraoperative imaging may be overutilized, it is almost unimaginable to embark on the reduction and fixation of a complex fracture, especially when using indirect reduction techniques, without fluoroscopy. This technology dates back to the very earliest days of radiography. In 1896, within 1 year of Roentgen’s discovery of x-rays, Thomas Edison developed the “fluoroscope,” which provided real-time viewing of x-ray images on a simple fluorescent screen housed in a light-tight viewing cone. The imaging process has since been known as “fluoroscopy.”1 For the first half of the 20th century, few changes were made to this basic design. High x-ray doses were required, and cumulative exposure times were often minutes rather than seconds. This combination caused excessive exposure doses to patients and staff, limiting the utilization of this technology. In 1948, John Coltman developed the image intensifier, which converted x-rays to an electron beam that could be accelerated and focused onto a fluorescent screen. The light emitted could be thousands of times brighter than that of a simple fluorescent screen, thus reducing the radiation doses required. Fluoroscopy could then be used with reasonable safety in more routine applications, including fracture care.
Technical Considerations
Fluoroscope Components
A typical fluoroscopic system (Fig. 1-1) includes the x-ray generating tube, a collimator, an image intensifier, and a video camera. The image intensifier is a tube with a fluorescent screen (input phosphor) that glows with the image produced by the x-ray pattern that exits the patient. The light from the input phosphor causes ejection of electrons from a photoelectric material adjacent to the input phosphor. These electrons are accelerated via a high voltage (30 kV) and focused onto a small (1-inch diameter) screen (the output phosphor). The output phosphor glows much more brightly than does the input phosphor (about 3,000 times) because of the energy gain provided by the acceleration of the electrons and also because of minification of the image. The image on the output phosphor is monitored via a video camera system.
- X-ray generator—Produces electrical energy and allows selection of the kilovolt peak (kVp) and tube current (mA) delivered to the x-ray tube.
- X-ray tube—Converts electrical energy of x-ray generator to x-ray beam.
- Collimator—Contains multiple sets of shutters (round and rectangular blades) that refine the x-ray beam shape. Collimating the beam to the area of interest reduces the exposed volume of tissue and results in less scatter and better image contrast. It also reduces the overall patient and surgeon radiation dose by minimizing scatter and direct exposure.
- Image intensifier—Converts the x-ray pattern into a bright visible image. Major components include an input layer (input phosphor + photocathode) to convert x-rays to electrons, an image intensifier tube to accelerate and focus the electrons, and an output layer (output phosphor) to create a visible image.
- Video camera system—Captures the image and displays it on a video monitor.
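The brightness gain described above can be thought of as the product of two factors: flux gain from electron acceleration and minification gain from shrinking the image onto the small output phosphor. The sketch below illustrates this relationship; the 9-inch input field and flux gain of 50 are assumed illustrative values, not specifications of any particular device.

```python
# Sketch: total brightness gain of an image intensifier as the product of
# flux gain (from electron acceleration) and minification gain (from image
# size reduction). Numbers are illustrative assumptions, not vendor specs.

def minification_gain(input_diameter_in: float, output_diameter_in: float) -> float:
    """Gain from concentrating the image onto a smaller output phosphor."""
    return (input_diameter_in / output_diameter_in) ** 2

def total_brightness_gain(flux_gain: float, input_d: float, output_d: float) -> float:
    """Overall brightness gain = flux gain x minification gain."""
    return flux_gain * minification_gain(input_d, output_d)

# Assumed 9-inch input field, 1-inch output phosphor, and flux gain of ~50:
print(f"Minification gain: {minification_gain(9.0, 1.0):.0f}x")            # 81x
print(f"Total brightness gain: {total_brightness_gain(50, 9.0, 1.0):.0f}x")  # 4050x
```

With these assumed values the total gain lands in the same range as the roughly 3,000-fold brightening cited in the text.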
X-Ray Basic Physics
X-rays generated by fluoroscopy and plain radiography are forms of electromagnetic radiation; other examples include visible light and radio waves. X-rays are produced when a heated filament (negatively charged cathode) within a tube generates electrons that are accelerated by application of high voltage (50 to 150 kVp) toward a tungsten target (positively charged anode). The electrons, repelled by the cathode and pulled toward the anode, accelerate to more than one-half the speed of light within one inch of travel. When the electrons impact the anode and abruptly decelerate, the energy they lose is converted to heat and electromagnetic radiation, including infrared light, visible light, ultraviolet waves, and x-rays.
The flow of electrons from the filament to the target is called the tube current and is measured in milliamperes (mA). Fluoroscopy is normally performed using 2 to 6 mA and an accelerating voltage of 75 to 125 kVp. The rate of x-ray production is directly proportional to the tube current but is more sensitive to changes in voltage than in current. For example, increasing the kVp by 15% has roughly the same effect on exposure as doubling the mA.
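The relative sensitivity of output to voltage versus current can be captured in one line of arithmetic, using the commonly cited radiographic rule of thumb that a 15% kVp increase roughly doubles exposure at fixed mA. The numbers below are illustrative only.

```python
# Sketch of the "15% rule" of thumb: raising kVp by 15% has roughly the
# same effect on exposure as doubling the mA. Illustrative arithmetic only;
# real tube output depends on the specific generator and beam filtration.

def equivalent_kvp_for_doubled_output(kvp: float) -> float:
    """kVp that yields roughly double the exposure at a fixed mA."""
    return kvp * 1.15

kvp = 80.0
print(f"{kvp:.0f} kVp -> {equivalent_kvp_for_doubled_output(kvp):.0f} kVp "
      "roughly doubles exposure (comparable to 2x mA)")
```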
When x-rays traverse tissue, they can result in (1) complete penetration, (2) total absorption, or (3) partial absorption with scatter. Complete penetration means that the x-rays completely passed through the tissue, resulting in an image. Total absorption means that the x-ray energy was completely absorbed by the tissue, resulting in no image. Partial absorption with scatter involves partial transfer of energy to tissue, with the scattered x-ray possessing less energy and following a different trajectory. The scattered radiation is responsible for causing radiation exposure to the operator and staff.
Units of Radiation Exposure and Dose
Radiation exposure is defined as the quantity of x-rays required to produce an amount of ionization in air at standard temperature and pressure. The traditional unit of exposure is the Roentgen (R), which is defined as R = 2.58 × 10⁻⁴ C/kg of air. The SI unit is Coulombs/kilogram (C/kg). The unit Roentgen, however, is defined only for air and cannot be used to describe dose to tissue. An absorbed dose of radiation can be measured in rad (Radiation Absorbed Dose). The SI unit is the Gray (Gy), where 1 Gy = 100 rad. Dose equivalent accounts for differences in biological effectiveness of different types of ionizing radiation. Dose equivalent is equal to absorbed dose (Gy or rad) multiplied by a radiation quality factor specific to the type of radiation being used. The traditional unit is the rem (Roentgen Equivalent in Man); the SI unit is the sievert (Sv), where 1 Sv = 100 rem. In diagnostic x-ray, the radiation quality factor is 1, so 1 rad is equivalent to 1 rem. The effective dose equivalent (EDE) takes into account that the potential health effect from single organ exposure is smaller than from whole body exposure. The EDE is defined as the sum of the absorbed dose to each tissue multiplied by a weighting factor, which reflects the relative risk of cancer from partial-body versus whole-body irradiation. Units are again rem or sieverts.
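The unit relationships above reduce to simple multiplications, which the following sketch makes explicit. The 50-rad input is an arbitrary example value.

```python
# Sketch of the unit relationships in the text: 1 Gy = 100 rad,
# 1 Sv = 100 rem, and dose equivalent = absorbed dose x quality factor
# (quality factor 1 for diagnostic x-ray). Example values are arbitrary.

def rad_to_gray(rad: float) -> float:
    return rad / 100.0

def dose_equivalent_rem(absorbed_rad: float, quality_factor: float = 1.0) -> float:
    """Dose equivalent (rem) from absorbed dose (rad) and quality factor."""
    return absorbed_rad * quality_factor

def rem_to_sievert(rem: float) -> float:
    return rem / 100.0

absorbed = 50.0  # rad (arbitrary example)
print(rad_to_gray(absorbed))                          # 0.5 (Gy)
print(dose_equivalent_rem(absorbed))                  # 50.0 (rem, QF = 1)
print(rem_to_sievert(dose_equivalent_rem(absorbed)))  # 0.5 (Sv)
```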
Radiation Exposure
Background and Direct Exposures
Exposure to intraoperative radiation is of concern to all members of the surgical team. For perspective, the average yearly exposure of the public to ionizing radiation is about 360 millirems (mrem), of which 300 mrem is from background radiation and 60 mrem from diagnostic radiographs. A chest radiograph exposes the patient to approximately 25 mrem, a hip radiograph to 500 mrem, and a hip CT to 1,000 mrem. A regular C-arm exposes the patient to approximately 1,200 to 4,000 mrem/min (lower for extremity and higher for pelvis). These values represent direct exposure. Recommended yearly limits of radiation are 2 to 5 rem (depending on the governing body) to the torso, 15 rem to the eyes, 30 rem to the thyroid, and 50 rem to the extremities (e.g., hands).2 Fetal limits, relevant to fluoroscopy in pregnant patients, are 0.5 rem over 9 months.
Surgeon Exposure
The surgeon and staff may be exposed directly, most commonly to hands in the path of the x-ray beam, or indirectly via scatter.2 Those in close proximity (<36 inches) are at highest risk for exposure.3 Sanders et al., in 1993, found that the average surgeon exposure from scatter during femoral and tibial nailing was 100 mrem per operation.4 At this dose, the yearly limit for the eyes (the most sensitive organ not typically shielded) would be reached after 150 cases. Of note, average fluoroscopy time in this study was 6.26 minutes. Similar results were reported by Muller et al. in 1998, where average exposure per case to the dominant index finger was 127 mrem for nailing procedures averaging 4.6 minutes of fluoroscopy.5 A number of other studies6–9 specifically evaluated exposure to the hands; exposure varied substantially by procedure and surgeon, ranging from undetectable to 570 mrem per procedure. It should be noted that more recent studies have shown shorter fluoroscopy times for long bone intramedullary nailing than the aforementioned studies that described exposure doses. Ricci et al.10 found the average fluoroscopy time for antegrade femoral nailing to be 153 seconds (range: 16 to 662 seconds) with a piriformis starting point and 95 seconds (range: 20 to 375 seconds) with a greater trochanteric starting point; in a separate study, they found the average fluoroscopy time for tibial nailing to be 72.4 or 82.6 seconds depending on the time of day of surgery.11 Newer generation C-arm devices also reduce the exposure per case.
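The back-of-envelope arithmetic above (a 15 rem/year eye limit reached after 150 nailing cases at 100 mrem each) can be made explicit:

```python
# Sketch reproducing the dose arithmetic in the text: how many cases per
# year before a yearly dose limit would be reached, given an average
# per-case exposure. Input values are taken from the studies cited above.

def cases_to_reach_limit(limit_rem: float, per_case_mrem: float) -> float:
    """Number of cases at per_case_mrem that sums to limit_rem (1 rem = 1,000 mrem)."""
    return (limit_rem * 1000.0) / per_case_mrem

# Eye limit of 15 rem/year at ~100 mrem of scatter per nailing case:
print(cases_to_reach_limit(15.0, 100.0))  # 150.0 cases
```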
Team Exposure
Exposure of operating room personnel other than the surgeon is also of concern. Many of these individuals accumulate more exposure time than surgeons do by virtue of being in the OR on a daily basis. A study utilizing simulated pelvic surgery found first-assistant exposure (2 feet away) of 6 mrem/min and no detectable exposure at the scrub nurse position (3 feet away) or the anesthesia position (5 feet away).3
Exposure Reduction Strategies
Standard strategies to mitigate exposure to scatter include decreasing exposure time, increasing distance, shielding, and contamination control. Typical shielding techniques used by orthopedic trauma surgeons include use of lead garments that shield the body core. Thyroid shields are a common adjunct. Because eye exposure can be many times higher than central exposure, leaded glasses have been recommended.12 There is little debate that less exposure is better than more, and a number of studies have documented strategies to reduce exposure. Use of real-time radiation exposure feedback from the Philips DoseAware device decreased radiation exposure by 60%.13
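Distance is so effective because, as a rough approximation, scatter intensity falls off with the inverse square of the distance from the beam. The sketch below applies that idealized model, seeded with the 6 mrem/min first-assistant figure from the simulated pelvic surgery study; real scatter fields are more complex, so this is a conceptual illustration, not dosimetry.

```python
# Sketch of inverse-square falloff of scatter with distance. An idealized
# model for intuition only; actual scatter fields depend on geometry,
# beam energy, and the patient, and can fall off even faster.

def scaled_exposure(exposure_at_ref: float, ref_dist_ft: float, new_dist_ft: float) -> float:
    """Inverse-square scaling of exposure rate from a reference distance."""
    return exposure_at_ref * (ref_dist_ft / new_dist_ft) ** 2

# Assumed 6 mrem/min at 2 feet (first-assistant position in the cited study):
for d in (2.0, 3.0, 5.0):
    print(f"{d:.0f} ft: {scaled_exposure(6.0, 2.0, d):.2f} mrem/min")
```

Stepping back from 2 to 5 feet cuts the modeled exposure rate by more than a factor of 6, consistent with the undetectable readings at the anesthesia position in the cited study.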
Differences Between Fluoroscopy and Plain Radiography
The fluoroscopic image generally has less contrast and less resolution of fine detail than does a radiographic image. A fundamental difference between plain radiographs and fluoroscopy is that fluoroscopy machines can automatically adjust exposure based on the density of the subject. Exposure for traditional plain radiography is set by the technician and is static. Fluoroscopy units are usually operated in an automatic brightness control (ABC) mode, in which a sensor in the image intensifier monitors the image brightness. When there is inadequate brightness, the ABC increases the kVp first, which increases the x-ray penetration through the patient, and then adjusts the mA to increase the brightness. Thicker soft tissue or larger cross-sectional areas will generate greater exposure to optimize brightness. However, radiodense metallic objects may inadvertently lead to overexposed images when automatic modes are used. Conversely, a field dominated by radiolucent material, such as air in a poorly centered image, will result in an underexposed image (Fig. 1-2).
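The ABC behavior described above (raise kVp first for penetration, then mA for output) can be sketched as a simple control loop. The thresholds, step sizes, and limits below are invented for illustration and do not reflect any manufacturer's algorithm.

```python
# Sketch of the automatic brightness control (ABC) logic described above:
# when the image is too dim, raise kVp first (more penetration), then mA
# (more output). All thresholds, step sizes, and limits are assumptions.

def abc_step(brightness: float, kvp: float, ma: float,
             target: float = 1.0, kvp_max: float = 120.0, ma_max: float = 6.0):
    """One adjustment step of a simplified ABC loop; returns (kvp, ma)."""
    if brightness >= target:
        return kvp, ma                        # image bright enough; no change
    if kvp < kvp_max:
        return min(kvp * 1.05, kvp_max), ma   # raise penetration (kVp) first
    return kvp, min(ma * 1.10, ma_max)        # kVp maxed out: raise tube current

kvp, ma = abc_step(brightness=0.6, kvp=75.0, ma=2.0)
print(kvp, ma)  # kVp increased, mA unchanged
```

A dim field (thick tissue) drives kVp up first; only once kVp is at its ceiling does the loop raise mA. This also illustrates the failure modes in the text: dense metal in the field keeps brightness low and drives output up, while a radiolucent (air-dominated) field reports high brightness and causes underexposure of the anatomy.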