1 Introduction
In spinal surgery today, many procedures are performed with no or only minimal image guidance. Preoperative computed tomography (CT) or magnetic resonance (MR) images are used for diagnosis and planning, but during surgery, two-dimensional C-arm fluoroscopy is widely used both for initial detection of the correct spinal level and for intraoperative imaging. Navigation systems exist, but mainly for placement of pedicle screws. These are usually first brought into use once the bone surface has been exposed. Using a simple landmark or surface registration method, the preoperative CT image is then aligned with the patient and can be used for planning and guidance of the screws. A number of groups have evaluated the use of navigation for this purpose, and a review of the topic was presented by Tjardes et al. [12]. They conclude that the benefits of image guidance, in terms of accurate placement of the screws and reduced exposure to ionizing radiation, have been proven, in particular for cervical and lumbar procedures. In other areas of spine surgery, navigation and image guidance are still at an experimental stage.
One of the main limitations of today's navigation systems for spine surgery is that they are often not available until after the bone surface has been exposed. The use of ultrasound has been proposed to overcome this limitation. By registering preoperative images to intraoperative percutaneous ultrasound images, navigation can start before incision and can therefore be used both for level detection and for planning at an early stage of the procedure. Thus, the use of X-ray fluoroscopy can potentially be reduced.
In order to make a navigation system based on intraoperative ultrasound clinically useful, the greatest challenge is to achieve accurate and robust registration between the preoperative images and the ultrasound images with minimal user interaction. Registration of CT images of the spine to corresponding ultrasound images has been investigated by several groups, and two main approaches have been explored: feature-based registration and intensity-based registration. In the first case, corresponding features are extracted from the two datasets prior to registration. In the case of spine surgery, the feature of choice is the bone surface, as this is the only feature that can be reliably detected in the ultrasound images. Segmentation of the bone surface from ultrasound images of the spine remains challenging due to noise, artifacts and the difficulty of imaging surfaces parallel to the ultrasound beam. A few methods have been described in the literature, ranging from simple ray tracing techniques [15] to more advanced methods based on probability measures [4, 7, 9] or phase symmetry [13]. Following surface extraction, the segmented bone surfaces are registered using the Iterative Closest Point (ICP) algorithm [2] or the unscented Kalman filter [9].
In intensity-based registration, a similarity metric based on the image intensities is optimized to find the spatial transformation that best maps one image onto the other [6, 8, 14, 15]. As MR/CT and ultrasound images present very different intensity and noise characteristics, a common approach is to create simulated ultrasound images from the preoperative data and register the simulated image to the real ultrasound image. In these simulations, the direction of sound wave propagation, transmission, reflection and noise can be modelled in order to obtain images that can be reliably registered to real ultrasound images based on image intensities.
While these studies show considerable promise, they focus almost exclusively on the registration of preoperative CT images. However, many spinal procedures, such as the treatment of disc herniations and intraspinal tumours, rely on the soft-tissue imaging capabilities of MR. Thus, by combining ultrasound imaging with preoperative MR, navigation could be extended to a variety of spinal procedures that do not benefit from image guidance today. In these procedures, ultrasound could also be used for intraoperative imaging, reducing the use of fluoroscopy even further. As a first step towards this end, we present a method for registration of preoperative MR images to percutaneous ultrasound images of the spine, including a preliminary assessment of its performance.
2 Methods and Experiments
Our registration method is feature-based and consists of two steps: First, the bone surfaces are segmented from both the ultrasound images and the MR images, and then the two surfaces are registered using a modified version of the ICP algorithm.
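As background for the second step, the standard rigid ICP loop (closest-point matching alternated with an SVD-based least-squares rigid fit) can be sketched as follows. This is the textbook form, not the paper's modified version, and all function and parameter names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, via SVD."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, n_iter=50, tol=1e-6):
    """Iteratively align the source point cloud to the target point cloud."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)                # closest-point matching
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                        # apply current estimate
        err = dist.mean()
        if abs(prev_err - err) < tol:              # stop when error stagnates
            break
        prev_err = err
    return src
```

Plain ICP of this kind converges only locally, which is why a reasonable initial alignment and, as in the paper, problem-specific modifications are typically needed in practice.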
2.1 Ultrasound Acquisition and Segmentation
The ultrasound images were acquired using a Vivid E9 scanner with an 11 MHz linear probe (GE Healthcare, Little Chalfont, UK). Some groups have used lower frequencies, which enable good imaging of deeper structures such as the transverse processes of the spine [6, 9, 13–15]. However, this makes imaging of superficial structures, such as the spinous processes and the sacrum, challenging. As these structures represent important features for the registration algorithm, we found that a relatively high frequency gave a better compromise between depth penetration and resolution. The ultrasound probe was tracked with the Polaris optical tracking system (NDI, Waterloo, ON, Canada), and both images and corresponding tracking data were recorded using the navigation system CustusX [1] with a digital interface to both the ultrasound scanner and the tracking system. The two-dimensional ultrasound images were also reconstructed to a three-dimensional volume using the Pixel Nearest Neighbor (PNN) reconstruction algorithm [11].
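The bin-filling stage of PNN reconstruction can be sketched in a few lines: each tracked 2-D pixel is transformed into volume coordinates and averaged into its nearest voxel. The hole-filling stage of PNN [11] is omitted here, and all function and parameter names are illustrative rather than taken from CustusX:

```python
import numpy as np

def pnn_reconstruct(frames, poses, vol_shape, voxel_size):
    """Pixel Nearest Neighbor bin-filling (hole-filling omitted).

    frames: list of 2-D ultrasound images.
    poses:  list of 4x4 image-to-volume transforms (from probe tracking).
    """
    acc = np.zeros(vol_shape)              # accumulated intensities
    cnt = np.zeros(vol_shape)              # number of hits per voxel
    for img, T in zip(frames, poses):
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # homogeneous pixel coordinates; the image plane sits at z = 0
        pts = np.stack([xs.ravel(), ys.ravel(),
                        np.zeros(h * w), np.ones(h * w)])
        vox = (T @ pts)[:3] / voxel_size   # to (fractional) voxel coordinates
        vox = np.round(vox).astype(int)    # nearest voxel
        ok = np.all((vox >= 0) & (vox < np.array(vol_shape)[:, None]), axis=0)
        np.add.at(acc, tuple(vox[:, ok]), img.ravel()[ok])
        np.add.at(cnt, tuple(vox[:, ok]), 1)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

Averaging multiple hits per voxel, as done here, is one common choice; it is also one source of the blurring noted below when frames overlap.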
While the reconstructed, three-dimensional ultrasound volume is useful for navigation, the reconstruction process tends to introduce a certain blurring. The volume usually also has a lower resolution than the original, two-dimensional ultrasound images. We therefore used the latter as input to our segmentation method. In order to extract the bone surfaces from these images, we used a combination of the bone probability maps introduced by Jain et al. [7] and Foroughi et al. [4], and the backward scan line tracing presented by Yan et al. [15]. In ultrasound images, reflections from bone surfaces are seen as bright ridges perpendicular to the ultrasound beam. To calculate the probability of each pixel of the image being part of such a ridge, the image $U$ was smoothed with a Gaussian filter, before calculating the Laplacian of Gaussian (LoG), i.e.

$$L = K_{LoG} * (K_G * U), \qquad (1)$$

where $K_G$ and $K_{LoG}$ are the convolution kernels of the Gaussian filter and the LoG filter respectively. This is a common operation in blob detection and usually produces a strong positive response for dark blobs and a strong negative response for bright blobs. To enhance the bright reflections, the positive values were therefore set to zero before taking the absolute value of the rest. The result was then added to the smoothed version of the original image to produce an initial bone probability map $P$, i.e.

$$P = K_G * U + \left|\min(L, 0)\right|. \qquad (2)$$
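The initial bone probability map described above can be sketched directly with standard filters; the sigma values below are illustrative, not those used in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def bone_probability_map(img, sigma_smooth=2.0, sigma_log=3.0):
    """Initial bone probability: smoothed image plus negated LoG ridge response."""
    smoothed = gaussian_filter(img.astype(float), sigma_smooth)
    log_resp = gaussian_laplace(smoothed, sigma_log)   # LoG of the smoothed image
    ridge = np.abs(np.minimum(log_resp, 0.0))          # keep only bright-ridge response
    return smoothed + ridge                            # initial bone probability map
```

Keeping only the negative LoG response before taking the absolute value is what turns the generic blob detector into an enhancer for bright reflections specifically.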
The other feature to be considered was the intensity profile in the propagation direction of the ultrasound. For a bone surface, this is typically characterized by a sudden, sharp peak followed by a dark shadow. To calculate the probability of a given pixel representing the maximum of such a profile, each scan line was considered separately. Assuming $p_i$ is the $i$th pixel of the initial bone probability map $P$ along a given scan line, the secondary bone probability of this pixel was found as

$$p'_i = \frac{1}{a}\sum_{k=i-a+1}^{i} p_k \;-\; \frac{\gamma}{b}\sum_{k=i+1}^{i+b} p_k, \qquad (3)$$

where $a$ is the width of a typical intensity peak and $b$ is the length of a typical bone shadow, both given in pixels. In our case, $a$ and $b$ were set to the pixel counts corresponding to 1.5 and 20 mm respectively. $\gamma$ is a weight that can be adjusted according to the overall noise level of the bone shadows in the image; in our case it was kept fixed.
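One plausible implementation of this peak-and-shadow measure, assuming scan lines run along the image columns (axis 0, shallow to deep) and using cumulative sums for constant-time window means, might look as follows; the window sizes and weight below are illustrative, not the paper's values:

```python
import numpy as np

def shadow_weighted_probability(prob_map, a=5, b=60, gamma=0.5):
    """Secondary bone probability along each scan line.

    prob_map: 2-D initial bone probability map, scan lines along axis 0.
    a, b:     peak width and shadow length, in pixels.
    gamma:    weight for the shadow term.
    """
    n, m = prob_map.shape
    out = np.zeros_like(prob_map, dtype=float)
    # c[k] holds the sum of the first k pixels of each scan line
    c = np.vstack([np.zeros((1, m)), np.cumsum(prob_map, axis=0)])
    for i in range(n):
        lo = max(0, i - a + 1)
        peak = (c[i + 1] - c[lo]) / (i + 1 - lo)        # mean over peak window
        hi = min(n, i + 1 + b)
        if hi > i + 1:
            shadow = (c[hi] - c[i + 1]) / (hi - i - 1)  # mean over shadow window
        else:
            shadow = np.zeros(m)                        # window beyond image
        out[i] = peak - gamma * shadow
    return np.maximum(out, 0.0)
```

A pixel scores highly only when a bright peak is followed by a sufficiently dark shadow; a bright band with bright tissue beneath it is suppressed by the subtracted shadow term.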