Virtual Cranio-Maxillofacial Surgery Planning with Stereo Graphics and Haptics



Fig. 4.1
Illustration of the components of our system for virtual surgery planning




Stereoscopic Visualization and Motion Parallax


Stereoscopy, from the Greek stereos (solid) and skopein (to look at), is the study of techniques for rendering and displaying 3D objects so that they are perceived through binocular vision. Studies show that in surgical treatment planning and training, stereoscopic visualization helps surgeons orient accurately in spatially complex anatomical regions and improves their visuo-spatial positioning [8]. It is therefore not surprising that 3D stereoscopic visualization was adopted very early in CMF surgery, which demands high spatial precision [9]. A variety of techniques have been invented to provide the user's left and right eyes with separate stereo images. Among these, auto-stereoscopic displays are a recent technology that does not require headgear. Another class of stereoscopic displays uses glasses to deliver the correct stereo views, of which shutter glasses are the most mature technology, yielding brilliant colors and high resolution at a fairly low cost.

Owing to the tremendous development of 3D visualization technology, stereo display systems have received much attention, but numerous studies make clear that stereoscopic visualization does not in general improve user performance compared with traditional monoscopic visualization [10]. What has been identified as a highly effective cue for spatial understanding, particularly when combined with stereo, is motion parallax [11, 12]. Motion parallax is the apparent dynamic displacement of objects in the observer's view caused by the observer's own motion; exploiting it in 3D visualization requires accurate and robust tracking of the observer's viewing position.

There are very few studies of working environments that combine stereo, motion parallax, and haptics [13]. We have opted to create a system that gives surgeons the visual cues they use in real surgery: binocular vision, parallax from self-motion, and parallax from object motion, and that at the same time co-locates the visual and haptic workspaces.


Haptics


Haptics, from Greek: haptesthai (to contact, to touch), encompasses both hardware devices and rendering algorithms that mediate the human sense of touch. In medicine, haptics has been used primarily in simulators for procedures such as palpation, laparoscopic surgery, and needle insertion. Haptics increases the realism and immersion in such simulators, as medical professionals rely on their sense of touch in these procedures. Coles et al. [14] provide a broad survey of the use of haptics in surgery training simulators. Pre-operative planning systems based on anatomical models from patient-specific CT and/or MRI data may also benefit from haptics. The addition of haptic interaction to a planning system lets the user not only view the relevant patient-specific anatomy, but also touch and manipulate virtual representations of, for example, bone structures.

A common type of haptic device is an input/output interface with which a user interacts by grasping and manipulating a handle, or end-effector, attached to a mechanical linkage. The device contains sensors that continuously track motion and/or forces applied to the handle, as well as actuators to generate force and/or torque feedback to the user. Haptic devices thus provide a bi-directional channel between the user and a virtual environment.

Haptic devices vary greatly in workspace size, number of degrees-of-freedom (DOF) and degrees-of-force-feedback (DOFF), maximum force and torque, and price. The target application determines the requirements for a haptic device, which often becomes a trade-off between force, fidelity, DOF, and cost. For example, if a high maximum force is important, strong mechanical linkages and actuators are required, which typically increase the inertia of the interface. An example of a commercially available haptic device is the Phantom Omni from Geomagic (Fig. 4.2), which tracks the handle position and orientation in six DOF and provides three DOFF. It offers a good compromise between force, fidelity, and cost for our application.



Fig. 4.2
The planning system hardware as seen from above (left) and from the side (right). The graphical objects are displayed by the monitor (a) and reflected on the half-transparent mirror (b). The user manipulates the 3D graphical objects with the haptic device (c) under the half-transparent mirror. The push-buttons on the 3D-Connexion controller (d) activate the grouping tool and the tool Snap-to-fit. The two infra-red cameras (e) track the marker rig on the shutter glasses (f) for user look-around

Haptic rendering algorithms compute context-dependent force feedback in real time as the user interacts with a virtual environment. The force feedback may, for example, consist of contact forces when the user touches a virtual object, in which case the haptic device generates forces that stop the user's hand, or of guiding forces that assist the user in performing specific tasks. Realistic force feedback requires a high update rate, typically 1 kHz, compared with the 30–60 frames per second required for smooth graphical updates. This places great demands on haptic rendering software, which requires a combination of highly optimized algorithms, carefully designed data structures, and pre-computation, in addition to substantial computational power.
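As a minimal illustration of penalty-based contact rendering (a sketch, not the system's actual implementation), the following Python fragment computes a spring-like contact force against a rigid sphere inside a roughly 1 kHz servo loop. The device calls read_position and send_force are hypothetical placeholders for whatever API the haptic device exposes; a production implementation would run this loop in a dedicated real-time thread, typically written in C or C++.

```python
import time
import numpy as np

def contact_force(position, sphere_center, sphere_radius, stiffness=800.0):
    """Spring-like penalty force pushing the handle out of a rigid sphere."""
    offset = position - sphere_center
    distance = np.linalg.norm(offset)
    penetration = sphere_radius - distance
    if penetration <= 0.0 or distance == 0.0:
        return np.zeros(3)                     # no contact: no force
    normal = offset / distance                 # outward surface normal
    return stiffness * penetration * normal    # force grows with penetration depth

def servo_loop(device, sphere_center, sphere_radius, rate_hz=1000):
    """Haptic servo loop at ~1 kHz; 'device' is a hypothetical interface."""
    period = 1.0 / rate_hz
    while True:
        start = time.perf_counter()
        position = np.asarray(device.read_position())   # hypothetical call
        force = contact_force(position, sphere_center, sphere_radius)
        device.send_force(force)                         # hypothetical call
        # Sleep the remainder of the 1 ms period to keep a stable update rate.
        time.sleep(max(0.0, period - (time.perf_counter() - start)))
```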



System Overview


Our system for CMF surgery planning, which combines stereoscopic 3D visualization with six DOF haptic rendering, is shown in Fig. 4.2. The system runs on an HP Z400 Workstation with an Nvidia Quadro 4000 Graphics Processing Unit (GPU) driving a Samsung 120 Hz time-multiplexed stereo monitor. The monitor is mounted on a half-transparent mirror rig, a Display 300 from SenseGraphics, which co-locates the stereo graphics with the workspace of the haptic device. The monitor with stereo glasses gives the user a stereoscopic view of the patient CT data, and the Geomagic Omni haptic device, positioned under the mirror, allows interaction; the user can thus see and interact with objects in the same physical space. A head tracker, based on a NaturalPoint OptiTrack system, continually tracks optical markers mounted on the stereo glasses. This enables look-around, that is, the ability to view objects from different angles by moving the head.
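To make the look-around mechanism concrete, the sketch below derives a per-frame view matrix from the tracked eye position, so that head motion produces motion parallax. It is a simplification under stated assumptions: a complete fishtank-VR setup such as ours would also use an off-axis projection matched to the physical screen corners and one view per eye, which is omitted here.

```python
import numpy as np

def look_around_view(eye_pos, screen_center, up=np.array([0.0, 1.0, 0.0])):
    """Right-handed look-at view matrix from the tracked eye position.

    eye_pos       : tracked head/eye position in world (screen) coordinates
    screen_center : fixed point on the display the camera looks toward
    Head motion changes eye_pos every frame, which yields motion parallax.
    """
    forward = screen_center - eye_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)

    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye_pos      # translate world into eye space
    return view

# For stereo, compute one view per eye by offsetting the tracked head position
# by half the inter-pupillary distance (roughly 63 mm) along the right axis.
```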


System Components


In this section, we describe the components in our system. To make a restoration plan, the surgeon follows the workflow chart in Fig. 4.1, which illustrates the order in which the components are used.


Bone Segmentation


The first step in our CMF surgery planning pipeline is to segment bones and bone fragments in the skull. A collective bone segmentation can be obtained by thresholding the gray-scale CT volume at a Hounsfield value corresponding to bone tissue. However, to plan the reconstruction of complex bone fractures, we also need to separate individual bone fragments from each other. Due to noise and partial volume effects, adjacent bones in the CT volume might become connected to each other after the thresholding and cannot be separated by simple connected component analysis.
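The two steps just described can be expressed compactly with NumPy/SciPy. The sketch below assumes the CT volume is already loaded as a NumPy array in Hounsfield units; the 300 HU threshold is an illustrative value rather than the one used in the chapter.

```python
import numpy as np
from scipy import ndimage

def threshold_bone(ct_hu, threshold=300):
    """Binary bone mask from a CT volume in Hounsfield units (illustrative threshold)."""
    return ct_hu >= threshold

def label_bone_regions(mask):
    """Label connected bone regions (26-connected neighborhood).

    Fragments that touch after thresholding receive the same label, which is
    exactly why the random walks separation step described next is needed.
    """
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, num_regions = ndimage.label(mask, structure=structure)
    return labels, num_regions
```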

To separate the bones, we have developed an interactive bone separation method based on the random walks algorithm [15]. As in [16], we construct a weighted graph from a binary bone segmentation and then use random walks with user-defined seeds to separate the individual bone fragments. The vertices of the graph represent the foreground voxels, and the edges represent connections between adjacent foreground voxels in a 6-connected neighborhood. Each edge is assigned a weight based on the absolute difference between the intensities of the image elements corresponding to the vertices it spans. For each foreground voxel, we compute the probability that a random walker starting from that voxel will reach a particular seed label before any other seed label.
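The following sketch shows one way to build the weighted 6-connected graph over foreground voxels. The Gaussian-style weighting of the intensity difference follows Grady's random walks formulation [15]; the constants, and the exact weighting function, are illustrative assumptions and may differ from our implementation.

```python
import numpy as np

def build_graph(intensities, foreground, beta=90.0, eps=1e-6):
    """Edges and weights of the 6-connected graph over foreground voxels.

    Returns (edges, weights, n_vertices): edges is an (m, 2) array of vertex
    index pairs; each weight decays with the absolute intensity difference
    between the two voxels the edge connects.
    """
    # Assign a graph vertex index to every foreground voxel.
    index = -np.ones(foreground.shape, dtype=np.int64)
    index[foreground] = np.arange(foreground.sum())

    all_edges, all_weights = [], []
    for axis in range(3):                              # neighbors along +x, +y, +z
        src = [slice(None)] * 3
        dst = [slice(None)] * 3
        src[axis], dst[axis] = slice(0, -1), slice(1, None)
        src, dst = tuple(src), tuple(dst)

        both = foreground[src] & foreground[dst]       # keep edges inside the bone mask
        diff = np.abs(intensities[src][both].astype(np.float64)
                      - intensities[dst][both].astype(np.float64))
        diff = diff / (diff.max() + eps)               # normalize to [0, 1]
        all_edges.append(np.stack([index[src][both], index[dst][both]], axis=1))
        all_weights.append(np.exp(-beta * diff ** 2) + eps)

    return np.concatenate(all_edges), np.concatenate(all_weights), int(foreground.sum())
```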

To simplify marking of individual bone fragments, we provide a brush tool that allows the user to paint seeds directly on the bone surfaces. The bones are rendered using GPU-accelerated ray-casting [17]. We represent the random walks segmentation problem as a sparse linear system [15] and use an algebraic multigrid solver [18] to solve the system. Figure 4.3 illustrates the segmentation process.
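Given the graph from the previous sketch, the random walks probabilities are the solution of a Dirichlet problem on the graph Laplacian [15]. The code below assembles that sparse system and, for simplicity, solves it with a direct sparse solver; our implementation uses an algebraic multigrid solver [18] instead, which scales far better for volume-sized graphs.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def random_walks_labels(edges, weights, n_vertices, seed_idx, seed_labels):
    """Assign every vertex the label of the seed a random walker most likely reaches first.

    edges, weights, n_vertices : graph from the previous sketch
    seed_idx    : indices of user-seeded vertices
    seed_labels : integer label per seeded vertex
    """
    i, j = edges[:, 0], edges[:, 1]
    W = sp.coo_matrix((np.r_[weights, weights], (np.r_[i, j], np.r_[j, i])),
                      shape=(n_vertices, n_vertices)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W       # graph Laplacian

    unseeded = np.setdiff1d(np.arange(n_vertices), seed_idx)
    L_U = L[unseeded][:, unseeded].tocsc()                    # Laplacian over unseeded vertices
    B = L[unseeded][:, seed_idx]

    labels = np.unique(seed_labels)
    prob = np.zeros((n_vertices, labels.size))
    prob[seed_idx, np.searchsorted(labels, seed_labels)] = 1.0
    for k, lab in enumerate(labels):
        rhs = -B @ (np.asarray(seed_labels) == lab).astype(float)
        prob[unseeded, k] = spsolve(L_U, rhs)                 # one solve per seed label

    return labels[np.argmax(prob, axis=1)]
```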



Fig. 4.3
Random walks bone separation. (a) Initial bone segmentation obtained by thresholding. (b) User-defined seeds painted directly on the bone surfaces. (c) Bone separation result obtained with random walks

Using this method, segmentation results comparable with manual segmentations can typically be obtained in a few minutes. Minor leaks or artifacts in the segmentation result can be corrected by manual slice-by-slice editing. The segmented bone fragments can then be loaded into the virtual planning system.


Bone-Puzzle


One fundamental task in CMF surgery is to restore the skeletal anatomy in patients with extensive fractures of the facial skeleton and jaws, a task that in complex cases resembles solving a 3D puzzle. The accuracy requirements are high; small errors in the positioning of each fragment may accumulate and result in an inadequate reconstruction.

Our system supports planning the restoration of skeletal anatomy using a virtual model, segmented from volumetric CT data as described in the previous section, in which the independent bone fragments are labeled and each is visualized in a unique color for clear identification [19] (see Fig. 4.4). The user may touch, move, and rotate the individual bone fragments with the haptic device, or move and rotate the entire working volume to view it from different angles. During fragment manipulation, forces and torques from contacts with other fragments are rendered haptically, giving the user an impression similar to that of manipulating a real, physical object among other objects. As penetration between fragments may be difficult to discern visually, the contact forces help the user avoid impossible placements of fragments during planning. When two or more fragments have been positioned relative to one another, the user may group them and manipulate them as one unit; additional fragments may subsequently be attached to extend the group, and fragments may also be detached from it again.



Fig. 4.4
Virtual reconstruction of a mandible. Each individual bone fragment is given a unique color (left). When the haptic cursor is held close to a bone fragment, it is highlighted (middle) and the user can then grasp and manipulate it with the six DOF haptic handle (right)
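A minimal sketch of the grouping mechanism is shown below. It represents each fragment's pose as a 4x4 rigid transform and stores poses relative to a group frame, so that a single transform moves all members together; this is an illustrative data structure, not the planning system's actual implementation.

```python
import numpy as np

class FragmentGroup:
    """Manipulate several bone fragments as one rigid unit (illustrative only).

    Each fragment's world pose is a 4x4 rigid transform. When a fragment is
    attached, its pose relative to the group frame is stored, so one group
    transform later moves every member together.
    """

    def __init__(self, group_pose=None):
        self.group_pose = np.eye(4) if group_pose is None else group_pose
        self.relative = {}                    # fragment id -> pose in group frame

    def attach(self, frag_id, world_pose):
        """Add a fragment, remembering its pose relative to the group frame."""
        self.relative[frag_id] = np.linalg.inv(self.group_pose) @ world_pose

    def detach(self, frag_id):
        """Remove a fragment and return its current world pose."""
        return self.group_pose @ self.relative.pop(frag_id)

    def move(self, delta):
        """Apply a rigid transform (e.g., from the haptic handle) to the whole group."""
        self.group_pose = delta @ self.group_pose

    def world_poses(self):
        """Current world pose of every member fragment."""
        return {fid: self.group_pose @ rel for fid, rel in self.relative.items()}
```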
