Optical tracking
Advantages
• Accuracy ~0.1–1 mm
• Does not depend on objects in its environment
• Large range (several meters)
• Wireless position markers
Disadvantages
• Requires line of sight
• Optical markers are relatively large

Electromagnetic tracking
Advantages
• Can track without line of sight (inside the body)
• Position sensors can be small enough to fit in needles and catheters (~0.5 mm)
Disadvantages
• Accuracy ~1–2 mm
• Limited range (typically 20–60 cm)
• Affected by ferromagnetic metals in its environment
• Wired position sensors
Tracking the ultrasound transducer expands the possibilities in ultrasound-guided needle interventions. By attaching a position tracker to both the ultrasound transducer and the needle, their relative positions can be computed and visualized, even when the needle is not in the ultrasound imaging plane. Such a tracked system can be further enhanced by attaching another position sensor to the patient. This allows visualization of the needle not only relative to the ultrasound image, but also relative to pre-procedural CT, MRI, or other models of patient anatomy.
There are other technologies for needle tracking in ultrasound-guided interventions beyond optical and electromagnetic. The simplest and oldest approach is mechanical tracking: a passive needle guide is attached to the ultrasound transducer. Ultrasound guidance methods for abdominal interventions use mechanical needle guides, but these constrain the needle motion to a single line relative to the ultrasound imaging plane. This line is displayed on the ultrasound screen, so the operator sees where the needle will be inserted relative to the image. The needle target can be chosen by moving the transducer with the fixed needle guide. In the spine, however, the target areas are visible only from a limited range of angles, and the needle usually has to pass through a narrow space. Spinal interventions therefore require more freedom of motion of both the transducer and the needle, so mechanical needle guides are typically not suitable for these procedures. Optical and electromagnetic position tracking, in contrast, allow any position and angle of the needle relative to the ultrasound transducer. Using the tracked position information, navigation software can display the needle relative to the ultrasound image in real time.
5 Hardware Components
Experimental tracked ultrasound systems have been studied for over a decade in spinal needle guidance applications, but products approved for clinical use have only recently appeared on the market. In this section we describe the architecture of tracked ultrasound systems in general, and how research prototypes can be built from low-cost components.
Tracked ultrasound hardware systems are composed of a conventional ultrasound machine and an added position tracker. In an experimental setting, there is often a dedicated computer for tracked ultrasound data processing, because ultrasound machines either restrict installation of research software or their hardware is not powerful enough for running additional applications. We will discuss a system design with a dedicated computer for our research application, because it can be easily built from existing components in any research laboratory (Fig. 1).
Fig. 1
Schematic layout of tracked ultrasound systems using electromagnetic (EM) position trackers
The majority of tracked ultrasound systems use electromagnetic technology for position tracking. Although optical tracking can also be used, the line of sight is often broken when the transducer is moved around the patient. This causes loss of the tracking signal, which is inconvenient for the operating staff. Electromagnetic trackers do not need a line of sight, and if the field generator can be placed close enough to the operating region, their accuracy is usually sufficient.
When choosing an ultrasound machine for a tracked ultrasound system, we should first consider systems that are already integrated with position tracking and have a research interface that provides real-time access to the ultrasound image and tracking data streams. If tracking is not already available in the chosen ultrasound machine, an external tracker needs to be attached to the transducer. Even if the ultrasound machine does not offer digital access to the images and imaging parameters, most ultrasound machines have a standard video output that can be tapped into using a video grabber device.
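For a research prototype, acquiring images from such a video output can be as simple as reading from a capture device. The following is a minimal sketch, assuming the frame grabber shows up as a standard capture device accessible through OpenCV; the device index and the display loop are illustrative placeholders rather than a recommendation for any particular grabber.

```python
# Minimal sketch: capture frames from an ultrasound machine's video output
# through a frame grabber that appears as a standard capture device.
# The device index (0) is an assumption that depends on the actual hardware.
import cv2

capture = cv2.VideoCapture(0)  # frame grabber device index (assumption)
if not capture.isOpened():
    raise RuntimeError("Could not open video grabber device")

while True:
    ok, frame = capture.read()                      # one video frame (BGR)
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # ultrasound is grayscale
    # ... hand the frame to the navigation/processing pipeline here ...
    cv2.imshow("ultrasound", gray)
    if cv2.waitKey(1) == 27:                        # Esc key stops the loop
        break

capture.release()
cv2.destroyAllWindows()
```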
Fixing the tracking sensor on the ultrasound transducer is not difficult using glue or a rigid clip. If a sterile environment is needed, the transducer along with the sensor can be placed in a sterile bag. The reference position sensor needs to be fixed to the patient as rigidly as possible. Since the reference sensor provides the link between the patient and the navigation coordinate system, the system is more convenient to use if the anatomical directions are marked on the reference sensor, so that it can always be placed in the same orientation. A reference sensor holder can provide the anatomical markers, along with an interface that can be firmly attached to the skin using an adhesive sheet (Fig. 2). Tracking the needle is the most challenging task, especially if the needle is thin (smaller than about 17 Ga) and bends during insertion. A larger, more accurate sensor can be clipped to the needle using a disposable plastic interface, but when the needle bends, a sensor clipped at the hub will not give accurate information on the tip position. Smaller sensors can be integrated in the needle stylet to provide direct tip tracking. Some companies offer electromagnetically tracked stylets approved for clinical use. However, such small sensors have a very limited usable range (around 200 mm) around the field generator, which can make the system hard to set up around the patient.
Fig. 2
Reference sensor holder
6 System Calibration
Ultrasound imaging differs significantly from other imaging modalities traditionally used in image-guided interventions. Both the contents and the positions of ultrasound images change rapidly in time, while CT and MRI images have static content and well-defined positions. Therefore ultrasound tracking requires special practices to ensure a maintainable navigation software design. We describe the coordinate systems that need to be represented in tracked ultrasound systems, and best practices in finding the transformations between the coordinate systems. In other words, we discuss calibration between components of the system.
In a full-featured navigation system, there are three dynamic and three static coordinate transformations (Fig. 3). The dynamic transformations are shown in orange, and the static ones in blue. The dynamic transformations change rapidly as the tracking sensors move relative to the Tracker coordinate system, which is most commonly defined by the electromagnetic field generator. The static transformations are equally important, but they do not change significantly during the intervention.
Fig. 3
Coordinate systems and transformations in a tracked ultrasound-guided needle navigation scene
All transformation chains eventually end in a common Right-Anterior-Superior (RAS) anatomical coordinate system. When a CT or MRI image is loaded in the needle navigation scene, its RAS coordinate system is used. In ultrasound-only cases, the RAS coordinate system can be defined at an arbitrary position, with the coordinate axis directions matching the patient's anatomical directions.
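To make the coordinate-system bookkeeping concrete, the sketch below composes the transform chain of Fig. 3 using 4x4 homogeneous matrices. The transform names mirror the figure; the identity matrices are placeholders for values that would come from the tracker (dynamic transforms) and from the calibrations described in this section (static transforms).

```python
# Sketch: composing the transform chain NeedleTip -> RAS from the transforms
# in Fig. 3. All transforms are 4x4 homogeneous matrices (numpy arrays);
# identity values are placeholders for real tracking and calibration data.
import numpy as np

def concat(*transforms):
    """Multiply transforms left to right: concat(A, B) maps a point by A @ B @ p."""
    result = np.eye(4)
    for t in transforms:
        result = result @ t
    return result

# Static transforms (obtained by calibration), placeholders here
needletip_to_needle = np.eye(4)   # pivot calibration
image_to_transducer = np.eye(4)   # probe calibration
reference_to_ras = np.eye(4)      # landmark registration to CT/MRI

# Dynamic transforms (streamed from the tracker), placeholders here
needle_to_tracker = np.eye(4)
transducer_to_tracker = np.eye(4)
reference_to_tracker = np.eye(4)

tracker_to_reference = np.linalg.inv(reference_to_tracker)

# Needle tip in the RAS (patient) coordinate system
needletip_to_ras = concat(reference_to_ras, tracker_to_reference,
                          needle_to_tracker, needletip_to_needle)

# Ultrasound image pixels in RAS (pixel-to-mm scaling folded into Image to Transducer)
image_to_ras = concat(reference_to_ras, tracker_to_reference,
                      transducer_to_tracker, image_to_transducer)

tip_in_ras = needletip_to_ras @ np.array([0.0, 0.0, 0.0, 1.0])
print(tip_in_ras[:3])
```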
Spatial calibration of the system entails the computation of the static transformations. The Reference to RAS transform is typically obtained by landmark registration: the transform is determined by minimizing the difference between points defined in the pre-procedural CT or MRI image and the same points marked on tracked ultrasound images. The method is very simple, the computation is immediate, and the result is usually accurate enough, but finding the corresponding anatomical locations in different imaging modalities requires experience. There have been promising attempts to automate this process by image-based registration. Automatic methods may require less skill from the user and might be more accurate (by matching a large number of points or surface patches), but so far they do not seem to match the speed, simplicity, and robustness of the manual registration method.
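A minimal sketch of such a landmark registration is shown below, using a standard SVD-based least-squares rigid fit. The landmark lists are hypothetical; in practice the points would be picked in the CT or MRI image and marked on tracked ultrasound images (or touched with a tracked pointer).

```python
# Sketch: rigid landmark registration - find the rotation R and translation t
# that best map points measured in the Reference coordinate system onto the
# corresponding points picked in the CT/MRI RAS image (least squares via SVD).
import numpy as np

def landmark_registration(moving_points, fixed_points):
    """Return a 4x4 transform mapping moving_points onto fixed_points (Nx3 arrays)."""
    moving = np.asarray(moving_points, dtype=float)
    fixed = np.asarray(fixed_points, dtype=float)
    moving_centroid = moving.mean(axis=0)
    fixed_centroid = fixed.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (moving - moving_centroid).T @ (fixed - fixed_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = fixed_centroid - R @ moving_centroid
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical corresponding landmarks (e.g. spinous processes), in millimeters
points_in_reference = [[10.0, 0.0, 5.0], [40.0, 2.0, 6.0], [70.0, -1.0, 4.0], [55.0, 20.0, 10.0]]
points_in_ras = [[-12.0, 85.0, -40.0], [-10.0, 86.0, -10.0], [-13.0, 83.0, 20.0], [8.0, 90.0, 5.0]]
reference_to_ras = landmark_registration(points_in_reference, points_in_ras)
```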
Computation of the NeedleTip to Needle transform is straightforward, typically performed using a simple pivot calibration. The tracked needle is pivoted around its tip for a couple of seconds and the transform that minimizes the dislocation of the needle tip is computed. Usually the calibration has to be performed only once for each needle type that may be used in the procedure.
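One common least-squares formulation of pivot calibration is sketched below, assuming the tracker reports the needle sensor pose as a rotation and a position for each sample; the function name and variable names are illustrative.

```python
# Sketch: pivot calibration by linear least squares. While the needle is pivoted
# around its tip, each tracked pose (R_i, p_i) of the needle sensor satisfies
#   R_i @ tip_offset + p_i = pivot_point
# for an unknown, constant tip_offset (tip in sensor coordinates) and
# pivot_point (tip in tracker coordinates). Stacking all poses gives the
# linear system  [R_i  -I] [tip; pivot] = -p_i.
import numpy as np

def pivot_calibration(rotations, positions):
    """rotations: list of 3x3 arrays, positions: list of 3-vectors (tracker frame)."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(rotations, positions)):
        A[3 * i:3 * i + 3, 0:3] = R
        A[3 * i:3 * i + 3, 3:6] = -np.eye(3)
        b[3 * i:3 * i + 3] = -np.asarray(p)
    solution, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    tip_in_sensor = solution[:3]      # NeedleTip to Needle translation
    pivot_in_tracker = solution[3:]   # fixed tip position during pivoting
    return tip_in_sensor, pivot_in_tracker
```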
Determining the Image to Transducer transform (also known as probe calibration) accurately is a difficult task, mostly because 3D point localization by ultrasound is inherently inaccurate due to the “thickness” of the ultrasound beam (Fig. 4). The beam width causes objects that are several millimeters away from the ideal imaging plane to appear in the ultrasound image, and it blurs object boundaries in the images.
Fig. 4
Anything inside the thick ultrasound beam will appear in the acquired ultrasound image
The Image to Transducer transform can be determined by moving a tracked pointing device (such as a needle or stylus) to various points in the image and recording the pointer tip position in both the Transducer coordinate system and the Image coordinate system (Fig. 5). The transform can then be computed by a simple landmark registration. The advantages of the method are that it is simple and reliable, requires only an additional tracked stylus, and can be performed in any medium where a needle can be inserted. However, positioning the pointing device’s tip in the middle of the image plane and finding the tip position in the image require an experienced operator, so the accuracy and speed of the calibration depend heavily on the operator.
Fig. 5
Spatial calibration of the transducer can be performed by recording pointer tip positions in the Transducer coordinate system and marking them in the image
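The bookkeeping behind this stylus-based probe calibration can be sketched as follows, assuming the stylus tip offset is known from a pivot calibration such as the one above and that the ultrasound pixel spacing is known; the pixel spacing value and function names are assumptions for illustration. The Image to Transducer transform is then obtained with the same kind of landmark registration shown earlier.

```python
# Sketch: collecting point pairs for stylus-based probe calibration.
# For every calibration point we record the tracked stylus and transducer poses
# (4x4 matrices in tracker coordinates) and the pixel picked in the image.
# The Image to Transducer transform is then a landmark registration between the
# two point lists (see landmark_registration above). Pixel spacing is assumed.
import numpy as np

pixel_spacing_mm = (0.2, 0.2)     # assumed ultrasound pixel size (column, row)

def stylus_tip_in_transducer(stylus_to_tracker, transducer_to_tracker, tip_in_stylus):
    """Map the stylus tip (from pivot calibration) into the Transducer frame."""
    tip_h = np.append(tip_in_stylus, 1.0)
    tracker_to_transducer = np.linalg.inv(transducer_to_tracker)
    return (tracker_to_transducer @ stylus_to_tracker @ tip_h)[:3]

def image_point_mm(column, row):
    """Convert a picked pixel to millimeters in the Image coordinate system."""
    return np.array([column * pixel_spacing_mm[0], row * pixel_spacing_mm[1], 0.0])

# For each fiducial point: append stylus_tip_in_transducer(...) to one list and
# image_point_mm(...) to the other, then run landmark_registration on the two
# lists to obtain the Image to Transducer transform.
```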
Automatic methods have been proposed to reduce the operator-dependency and increase the accuracy of the probe calibration. These methods extract features (such as intersection points or lines) from the image automatically, then compute the transform that minimizes the difference between the expected and the measured positions of the features.
The intersection of a thin linear object (such as a wire or needle) with the image plane shows up clearly in the image as a bright spot. Automatic detection of small bright spots in an image is a relatively simple task, and the position of a spot can usually be determined very accurately; therefore, many calibration phantoms contain a number of wires at known positions. A particularly interesting setup is when the wires are arranged in multiple N-shaped patterns (Fig. 6), because if the wire positions are known in 3D and the relative distances of the intersection points in the image are known in 2D, the position of the middle wire intersection can be computed in 3D [2]. Arranging the wires in planes has the additional advantage that the intersection points in the image are collinear, which can be used to automatically reject bright spots in the image that do not correspond to an actual wire intersection point (Fig. 7). Three N-shaped wire patterns have been shown to be enough to reach submillimeter calibration accuracy [2]. A fully automatic, open-source implementation of the N-wire-based probe calibration is available in the Plus toolkit [3]. The advantage of the method is that it is fully automatic: a large number of calibration points can be collected, which reduces the effect of random errors; the results do not depend much on the operator; and the calibration can be completed within a few minutes. The disadvantages are that it requires measurement of the wire positions in the tracker coordinate system (typically by landmark registration of the calibration phantom), it requires phantom fabrication, and the imaging parameters must be set so that the wire intersections can be detected automatically and accurately.
Fig. 6
Calibration phantom containing 3 N-wires. 3D-printing-ready CAD model, instructions, and calibration software are all available in the Plus toolkit [3]
Fig. 7
Ultrasound image of the calibration phantom containing 3 N-wires with an overlay showing the results of the automatic marker detection algorithm
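The geometric step at the heart of the N-wire method can be sketched as follows: because the three detected spots are collinear and the middle one lies on the diagonal wire, the ratio of in-image distances locates that intersection along the diagonal wire in phantom coordinates [2]. The wire endpoint coordinates and detected pixel positions below are hypothetical; the resulting phantom-frame point would still have to be mapped into the Transducer frame using the tracked phantom and transducer poses.

```python
# Sketch: recovering the 3D position (in phantom coordinates) of the image
# plane's intersection with the middle (diagonal) wire of one N-wire, from the
# three collinear bright spots detected in the image [2].
import numpy as np

def middle_wire_point_3d(p_side1_px, p_middle_px, p_side2_px,
                         diagonal_start_3d, diagonal_end_3d):
    """p_*_px: detected 2D spots (pixels), ordered side wire 1, diagonal, side wire 2.
    diagonal_start_3d: endpoint of the diagonal wire that meets side wire 1;
    diagonal_end_3d: endpoint that meets side wire 2 (phantom frame, mm)."""
    p1 = np.asarray(p_side1_px, dtype=float)
    p2 = np.asarray(p_middle_px, dtype=float)
    p3 = np.asarray(p_side2_px, dtype=float)
    # The ratio of in-image distances equals the ratio along the diagonal wire
    ratio = np.linalg.norm(p2 - p1) / np.linalg.norm(p3 - p1)
    start = np.asarray(diagonal_start_3d, dtype=float)
    end = np.asarray(diagonal_end_3d, dtype=float)
    return start + ratio * (end - start)

# Hypothetical example: diagonal wire spanning 40 mm between the side wires
point_3d = middle_wire_point_3d([210, 310], [260, 312], [330, 315],
                                [0.0, 0.0, 10.0], [40.0, 0.0, 10.0])
```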
Other automatic methods have been proposed that use a simpler calibration phantom. For example, it is possible to compute the probe calibration just by imaging a flat surface while completing certain motion patterns with the transducer. This method is called single-wall calibration. Its advantage is that it requires only a simple, flat, diffusely reflecting surface as a calibration phantom; however, the method is not very robust and can give very inaccurate results if the motion patterns are not performed carefully or if suboptimal imaging parameters are used.
The ultrasound imaging system is typically only loosely coupled to the position tracking system, so tracking and imaging data recorded at the same time can be temporally misaligned. The goal of temporal calibration is to detect and compensate for such temporal misalignments. Accurate temporal calibration is needed when images are acquired while the transducer is moving. High accuracy and reliability are achievable using hardware triggers. If hardware-based synchronization is not available, but the acquisition rate and latency are constant in both the imaging and the tracking device, then a software-based method can be used to compute the fixed time offset. Methods based on detecting certain events (such as sudden motion) have been proposed. These methods are easy to implement, but they are either inaccurate or require lengthy data acquisition, because acquiring a single measurement sample takes a few seconds. Correlation-based methods require the operator to perform a quasi-periodic motion with the transducer for a few seconds while imaging and tracking data are recorded (Fig. 8). A position signal is then extracted from each data stream, and the time offset that results in the highest correlation between the two position signals is computed (Fig. 9). The position signal from the 3D pose data can be computed as the position along the first principal axis of the motion. The position signal from the image data can be obtained by detecting the position of a feature (such as the bottom of the water tank) and using its position along a chosen axis. The correlation-based temporal calibration method is accurate and reliable, and a free, open-source implementation is available in the Plus toolkit [3].
Fig. 8
Moving the transducer up/down repeatedly for acquiring tracking and imaging data for temporal calibration (left). Position of the water tank bottom is automatically detected in the ultrasound image and used as position signal for the image data. Position of the water tank bottom is shown for the top and bottom positions (center, right)
Fig. 9
Without temporal calibration the video and tracking data are misaligned (top). Temporal calibration minimizes the misalignment (bottom)
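The core of the correlation step can be sketched as below, assuming the two position signals have already been extracted and resampled onto a common uniform time grid; the function and parameter names are illustrative, and the sign convention of the returned offset must be matched to how it is applied downstream.

```python
# Sketch: estimating the fixed time offset between the tracking-based and
# image-based position signals by maximizing their correlation. Both signals
# are assumed to be already resampled onto the same uniform time grid, with
# sampling_period seconds between samples.
import numpy as np

def temporal_offset(tracking_signal, image_signal, sampling_period, max_lag_samples):
    """Return the lag (in seconds) that best aligns image_signal with tracking_signal."""
    a = np.asarray(tracking_signal, dtype=float)
    b = np.asarray(image_signal, dtype=float)
    a = (a - a.mean()) / a.std()   # normalize so correlation values are comparable
    b = (b - b.mean()) / b.std()
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag_samples, max_lag_samples + 1):
        if lag >= 0:
            x, y = a[lag:], b
        else:
            x, y = a, b[-lag:]
        n = min(len(x), len(y))
        corr = np.dot(x[:n], y[:n]) / n    # mean product over the overlapping part
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag * sampling_period
```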
7 Volume Reconstruction of Tracked Ultrasound
The positions of recorded ultrasound images can be used to reconstruct three-dimensional ultrasound volumes. Reconstructed volume data can be stored in the same format as other volumetric images (CT or MRI), but the voxel intensity values still depend strongly on the direction of sound propagation. Therefore, processing and visualization of such volumetric images are difficult. Intensity values in ultrasound are not characteristic of tissue types and are often caused by artifacts (including scattering and shadowing) rather than anatomical structures. Image quality and parameters also depend on the settings of the ultrasound scanner, the size of the patient, and the motion pattern of the transducer during image recording.
Reconstructed image volumes are often used in cross-modality image registration for fusion of ultrasound with pre-procedural CT or MRI images. These promising applications are still in the research phase, but they may play a significant role in clinical practice in the future, as they combine the excellent tissue visualization of other modalities with the safety, portability, and accessibility of ultrasound.
The quality of reconstructed ultrasound volumes depends on many factors, including the quality of the input images, the calibration accuracy of the transducer tracker, the accuracy of temporal synchronization between image acquisition and position tracking, and the algorithm used to fill voxels of the reconstructed volume that are not covered by any recorded image. Fortunately, there are a number of open-source implementations of ultrasound volume reconstruction algorithms.
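As a rough illustration of the reconstruction step itself, the sketch below pastes each tracked frame into a regular voxel grid by nearest-voxel insertion and averages overlapping contributions. This is a simplified stand-in under assumed inputs (frame list, per-frame Image to RAS transforms, pixel spacing, and volume geometry), not a description of any particular open-source implementation, and hole filling of untouched voxels is omitted.

```python
# Sketch: pasting tracked ultrasound frames into a regular voxel grid.
# Each pixel is mapped to RAS with its frame's Image to RAS transform, written
# into the nearest voxel, and overlapping contributions are averaged.
import numpy as np

def reconstruct_volume(frames, image_to_ras_transforms, pixel_spacing_mm,
                       volume_origin_ras, voxel_size_mm, volume_shape):
    accumulator = np.zeros(volume_shape, dtype=float)
    counts = np.zeros(volume_shape, dtype=int)
    rows, cols = frames[0].shape
    # Image-plane pixel coordinates in millimeters (z = 0 in the Image frame)
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))
    pixels_mm = np.stack([c * pixel_spacing_mm[0],
                          r * pixel_spacing_mm[1],
                          np.zeros_like(c, dtype=float),
                          np.ones_like(c, dtype=float)], axis=-1)
    for frame, image_to_ras in zip(frames, image_to_ras_transforms):
        points_ras = pixels_mm.reshape(-1, 4) @ image_to_ras.T
        voxel_idx = np.round((points_ras[:, :3] - volume_origin_ras) / voxel_size_mm).astype(int)
        inside = np.all((voxel_idx >= 0) & (voxel_idx < np.array(volume_shape)), axis=1)
        idx = voxel_idx[inside]
        values = frame.reshape(-1)[inside]
        np.add.at(accumulator, (idx[:, 0], idx[:, 1], idx[:, 2]), values)
        np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    filled = counts > 0
    accumulator[filled] /= counts[filled]   # average where several frames overlap
    return accumulator
```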