Surface-Imaging-Based Patient Positioning in Radiation Therapy



Fig. 10.1
Illustration of the applications of surface-imaging systems in radiation therapy in the estimation of (a) inter-fractional and (b) intra-fractional setup errors



The main advantages of surface-imaging systems are (1) the absence of ionizing radiation and (2) real-time acquisition. Intuitively, the non-ionizing nature of these approaches can be considered the most important motivation for the employment of surface-imaging systems in radiation therapy. Conventional radiographic imaging-based methods provide information about internal body structures and thus enable accurate target-based positioning. However, the additional radiation dose that the patient receives during each CT scan is unavoidable.

The current techniques for patient positioning using surface imaging can be divided, according to their imaging principle, into techniques based on projected light patterns, techniques based on laser scanning and techniques based on infrared-ray time-of-flight (TOF) imaging. One might consider ultrasound (US) imaging a feasible technique for patient positioning because it can produce three-dimensional (3D) images of body structures without the use of ionizing radiation. Despite these advantages, however, US imaging is limited by its low image quality, as well as by the fact that the pressure of the probe deforms the surface and internal structures. Therefore, US imaging is not discussed in this chapter.

Surface-imaging systems are generally ‘marker-less’ because they do not require reflective markers (which are used in conventional approaches) to be attached to the patient’s surface (Wang et al. 2001, Meeks et al. 2005, Wagner et al. 2007, Yoshitake et al. 2008). Instead of using reflective markers, the patient’s surface is illuminated with light rays (e.g. low-energy infrared laser rays) from a light source installed on an imaging unit, and the rays reflected from the patient’s surface are detected by cameras installed on the same unit. Measurements of the detected rays are used to generate a 3D surface image – i.e. a 3D point distribution of the patient’s surface. The main advantage of the surface image is that it represents the topography of the patient’s surface, which enables us to estimate positioning errors using image registration algorithms.

In this chapter, we introduce the theoretical aspects of surface image registration and analysis techniques that are useful for patient positioning in radiation therapy. We explain fundamental approaches for the mathematical reconstruction of the patient’s surface using non-uniform rational B-spline (NURBS) modelling. In addition, we introduce an approach for analysing the topography of the patient’s surface based on differential geometry in order to localize anatomical feature points on the patient’s surface. Finally, we explain the concept of the iterative closest point (ICP) algorithm, which is widely adopted for the estimation of patient positioning errors.



10.2 Surface-Imaging-Based Patient Positioning


A surface-image-based patient positioning and motion detection system consists of two main components: a surface-imaging device and a surface-image processing unit (e.g. a personal computer). During the treatment session, the imaging device captures images of the patient’s surface, which are fed to the processing unit. Next, these treatment images are preprocessed in order to reduce spatial and temporal noise, such as that caused by the imaging circuitry. In addition, optical distortions caused by the optical components of the imaging device are corrected using pre-calculated calibration parameters. At the same time, reference images are retrieved from a database or a storage device. In the context of this chapter, the reference image is a surface image acquired either at the beginning of the treatment session or at the planning phase. Another option is to derive the reference image from the planning CT image. The moving image, on the other hand, is the surface image acquired during treatment. Since the surface image is the essential element in this computational pipeline, we start with its mathematical definition.


10.2.1 Definition of Surface Image


A surface image can be expressed by using position vectors located in a Cartesian coordinate system. Let a position vector be $\boldsymbol{p}=\left(x, y, z\left(x, y\right)\right)\in \mathbb{R}^3$. Thus, a surface image I consisting of N position vectors on the patient’s surface can be expressed as follows (Colombo et al. 2006):



$$ \boldsymbol{I}=\left[\begin{array}{ccc} x_1 & y_1 & z_1\left(x_1,y_1\right)\\ \vdots & \vdots & \vdots \\ x_N & y_N & z_N\left(x_N,y_N\right)\end{array}\right]. $$

(10.1)
This vector-based representation of the surface image enables the implementation of shape-modelling techniques and image registration algorithms that can be used to calculate patient positioning errors using both rigid and nonrigid transformations. A rigid transformation entails a translation vector and a rotation matrix, whereas a nonrigid transformation includes scaling and nonlinear deformations represented as pointwise displacement vectors. For simplicity, we will hereafter refer to a position vector representing a point on a surface as a point.
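To make this representation concrete, the following Python sketch (a minimal example using NumPy; the coordinates and transformation parameters are arbitrary illustrative values, not taken from any clinical system) stores a surface image as an N × 3 array, as in Eq. (10.1), and applies a rigid transformation composed of a rotation matrix and a translation vector:

```python
import numpy as np

# A surface image I as in Eq. (10.1): one row per surface point (x, y, z).
I = np.array([[0.0, 0.0, 1.2],
              [1.0, 0.0, 1.5],
              [0.0, 1.0, 1.4]])

def rigid_transform(points, angles_deg, t):
    """Apply a rigid transformation (rotation followed by translation).

    angles_deg: rotation angles about the x, y and z axes in degrees.
    t:          3-element translation vector.
    """
    ax, ay, az = np.radians(angles_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [ 0,          1, 0         ],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0,           0,          1]])
    R = Rz @ Ry @ Rx                     # combined rotation matrix
    return points @ R.T + np.asarray(t)  # transform every point

# Example: a 2-degree rotation about z and a 3 mm lateral shift.
I_moved = rigid_transform(I, angles_deg=(0.0, 0.0, 2.0), t=(3.0, 0.0, 0.0))
```

Estimating the inverse of such a transformation from a reference and a moving surface image is precisely the task of the registration algorithms discussed later. Next, we introduce basic technologies for the acquisition of surface images.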


10.2.2 Surface Imaging Based on Projected Light Pattern


Surface-imaging systems based on a projected light pattern (e.g. AlignRT, Vision RT Ltd., UK) have been widely used to monitor patient positioning. Such systems consist of two imaging units (pods) suspended from the ceiling of the treatment room.

In these systems, the patient’s surface image is acquired according to the stereovision imaging principle (Bert et al. 2005). In stereovision imaging, an epipolar geometrical model is used for computing the 3D coordinates of a point on the object’s surface. This model is based on the cameras’ internal parameters and the relative position of the point in two images captured from different viewpoints. Therefore, this principle requires a correspondence between the pixels of the object’s surface points in the acquired images. As the patient’s skin might not include sufficient information for establishing this correspondence, a speckle pattern is projected onto the surface during the acquisition of the stereo images, and the pattern is detected in the images in order to estimate the required correspondence.
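As a minimal sketch of the underlying computation (assuming an idealized, rectified stereo pair; the focal length, baseline and pixel coordinates are illustrative values, not the specification of any commercial system), the depth of a matched surface point is inversely proportional to its disparity between the two views:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_mm):
    """Depth of a surface point from a rectified stereo correspondence.

    x_left, x_right: horizontal pixel coordinates of the same surface
                     point in the left and right images (the projected
                     speckle pattern is what makes this match findable).
    focal_px:        camera focal length expressed in pixels.
    baseline_mm:     distance between the two camera centres.
    """
    disparity = x_left - x_right   # pixel shift between the two views
    return focal_px * baseline_mm / disparity

# Example: a 180-pixel disparity with f = 1200 px and a 300 mm baseline.
z = stereo_depth(640.0, 460.0, focal_px=1200.0, baseline_mm=300.0)
print(f"depth = {z:.0f} mm")  # 2000 mm
```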


10.2.3 Surface Imaging Based on Laser Scanning


Surface-imaging systems based on laser scanning have also been developed in order to monitor patient positioning. Typically, a laser scanner consists of a laser source, a mirror attached to a motor, and a camera. A laser fan beam is swept over the patient’s surface by changing the mirror angle using the motor. With each sweep of the laser beam, the camera captures an image of the reflected laser light over the patient’s surface. The surface image is then reconstructed by estimating the distance between the camera and the object based on a triangulation principle (Brahme et al. 2008).
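As a hedged illustration of this triangulation principle (a simplified planar geometry with an assumed baseline between the laser source and the camera; the angles and distances are illustrative values), the camera-to-surface distance follows from the law of sines in the laser-camera-spot triangle:

```python
import numpy as np

def triangulated_range(baseline_mm, laser_angle_deg, camera_angle_deg):
    """Camera-to-surface distance for one laser spot.

    baseline_mm:      separation between the laser source and the camera.
    laser_angle_deg:  beam direction set by the rotating mirror,
                      measured from the baseline.
    camera_angle_deg: direction under which the camera sees the
                      reflected spot, measured from the baseline.
    """
    a = np.radians(laser_angle_deg)
    b = np.radians(camera_angle_deg)
    # Law of sines: the angle at the surface point is pi - a - b,
    # and sin(pi - a - b) = sin(a + b).
    return baseline_mm * np.sin(a) / np.sin(a + b)

# Example: 500 mm baseline, mirror at 70 degrees, camera ray at 75 degrees.
d = triangulated_range(500.0, 70.0, 75.0)  # about 819 mm
```

Sweeping the mirror angle while repeating this computation for every illuminated pixel yields one surface profile per laser line.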


10.2.4 Infrared Ray-Based Time-of-Flight Camera


Recently, low-cost range (distance) imaging systems (e.g. TOF cameras) have been suggested for use in monitoring patient positioning in radiation therapy (Placht et al. 2012, Bauer et al. 2013). The TOF camera consists of infrared light-emitting diodes (IR-LEDs), which irradiate the surface of the patient’s body with infrared rays, and an imaging sensor that receives the reflected rays.

The TOF camera produces surface images based on measurements of the distance between the object and the camera. The distance is measured by calculating the phase shift between the irradiated and reflected infrared light rays over the patient’s surface according to the following equation:



$$ d=\frac{c}{4\pi f_{\mathrm{mod}}}\varphi, $$

(10.2)
where d is the estimated distance, φ is the phase shift, $c\approx 3\times 10^{8}$ m/s is the speed of light and $f_{\mathrm{mod}}$ is the modulation frequency of the irradiated rays (Büttgen and Seitz 2008).
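Equation (10.2) is straightforward to evaluate. The short sketch below also computes the unambiguous measurement range $c/(2 f_{\mathrm{mod}})$ implied by the 2π wrapping of the phase shift (the 20 MHz modulation frequency is an illustrative value, not a specification of a particular camera):

```python
import numpy as np

C = 3.0e8  # speed of light in m/s

def tof_distance(phase_shift_rad, f_mod_hz):
    """Object-camera distance from the measured phase shift, Eq. (10.2)."""
    return C * phase_shift_rad / (4.0 * np.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Largest distance measurable before the phase wraps past 2*pi."""
    return C / (2.0 * f_mod_hz)

f_mod = 20.0e6                           # 20 MHz modulation (illustrative)
print(tof_distance(np.pi / 2.0, f_mod))  # 1.875 m
print(unambiguous_range(f_mod))          # 7.5 m
```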

Figure 10.2 shows an example of a surface image acquired by using a TOF camera. Figure 10.2a shows a picture of an anthropomorphic head phantom, and Fig. 10.2b shows a surface image of the phantom acquired by a TOF camera (CamCube 3.0, PMD Technologies, Siegen, Germany). The colours represent the distance between the camera and phantom’s surface.



Fig. 10.2
Example of a surface image acquired by using a time-of-flight (TOF) camera: (a) anthropomorphic head phantom. (b) Surface image of the head phantom acquired by a TOF camera. Red points belong to closer regions, whereas black points belong to further regions


10.3 Mathematical Reconstruction of Patient’s Surface Using NURBS


The mathematical reconstruction of the patient’s surface is a useful technique for compensating for sparsity and discontinuity in surface images. More specifically, the sparsity of a surface image originates from the low spatial resolution of the imaging sensor (the number of pixels in the image) and the distance between the camera and the object. In addition, owing to the noise produced by the imaging circuitry and the reflectivity characteristics of the surface, the smoothness of the acquired surface image may deteriorate. Such limitations affect the quality of the acquired image by creating outliers and/or changing the topographic attributes of the surface image. Consequently, image analysis approaches based on differential vectors and image registration techniques (explained later) are hampered by the appearance of outliers and/or changes in the topographic attributes. By using an appropriate reconstruction technique, it is possible to obtain dense and smooth (i.e. continuously differentiable) surfaces that improve the outcome of the aforementioned processes. First, however, we must define the principle of surface parameterization, as it is essential for the following topics.


10.3.1 Parameterized Surface


Let S be a surface consisting of points $\boldsymbol{p}\in \mathbb{R}^3$, as shown in Fig. 10.3. A parameterization of the surface is a map $\boldsymbol{S}:\mathbb{R}^2\to \mathbb{R}^3$. In other words, it can be obtained by assigning two values of the parametric variables $(u, v)\in \mathbb{R}^2$ to each point $\boldsymbol{p}\in \mathbb{R}^3$ on the surface. Thus, the parameterization of the surface S can be expressed as follows:



$$ \boldsymbol{p}\left( u, v\right)=\boldsymbol{p}\left( x\left( u, v\right), y\left( u, v\right), z\left( x\left( u, v\right), y\left( u, v\right)\right)\right). $$

(10.3)




Fig. 10.3
A surface in a Cartesian coordinate system parameterized using two parametric variables u and v


10.3.2 NURBS Surface Reconstruction


The basic idea of NURBS modelling is to calculate a smooth approximation of the position of a query point $\widehat{\boldsymbol{S}}(u, v)$ on the object’s surface by using a set of neighbouring points derived from the original image, which are called control points, together with piecewise B-spline basis functions that determine the influence of each control point on the query point’s position. Given control points $\boldsymbol{p}_{i,j}$, the query point $\widehat{\boldsymbol{S}}(u, v)$ can be calculated using the following equation:



$$ \hat{\boldsymbol{S}}\left( u, v\right)=\frac{\sum_{i=0}^n\sum_{j=0}^m{N}_{i, c}(u){N}_{j, d}(v){w}_{i, j}{\boldsymbol{p}}_{i, j}}{\sum_{i=0}^n\sum_{j=0}^m{N}_{i, c}(u){N}_{j, d}(v){w}_{i, j}}, $$

(10.4)
where $u, v\in \mathbb{R}$ are the parametric variables; $n+1$ and $m+1$ are the numbers of control points in the u and v directions, respectively; $w_{i,j}$ is a weighting factor; and $N_{i,c}(u)$ and $N_{j,d}(v)$ are the basis functions of degrees c and d in the u and v directions, respectively (Piegl and Tiller 1997). The numerator of Eq. (10.4) can be seen as a locally weighted summation of the control points, whereas the denominator can be seen as a normalization term.

The degree of the basis function is an important factor in determining the shape of the obtained surface. Cubic functions, for example, produce smoother surfaces than linear or quadratic functions. The basis functions of the parametric variable u can be calculated by using the following Cox-de Boor recursive formulas (Cox 1972, de Boor 1972):



$$ N_{i,0}(u)=\begin{cases}1 & \mathrm{if}\; u_i\le u<u_{i+1}\\ 0 & \mathrm{otherwise},\end{cases} $$

(10.5)




$$ N_{i,c}(u)=\frac{u-u_i}{u_{i+c}-u_i}N_{i,c-1}(u)+\frac{u_{i+c+1}-u}{u_{i+c+1}-u_{i+1}}N_{i+1,c-1}(u). $$

(10.6)
Similarly, the basis functions of the parametric variable v can be computed as



$$ N_{j,0}(v)=\begin{cases}1 & \mathrm{if}\; v_j\le v<v_{j+1}\\ 0 & \mathrm{otherwise},\end{cases} $$

(10.7)




$$ N_{j,d}(v)=\frac{v-v_j}{v_{j+d}-v_j}N_{j,d-1}(v)+\frac{v_{j+d+1}-v}{v_{j+d+1}-v_{j+1}}N_{j+1,d-1}(v). $$

(10.8)
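A direct implementation of this recursion is short. The following Python sketch (a minimal example; the function name is our own) evaluates the basis functions of Eqs. (10.5) and (10.6), using the common convention that terms with a zero denominator, which arise at repeated knots, are treated as zero:

```python
def basis(i, degree, u, knots):
    """B-spline basis function N_{i,degree}(u) via the Cox-de Boor
    recursion of Eqs. (10.5) and (10.6)."""
    if degree == 0:
        # Eq. (10.5); the half-open interval means the end point u = 1
        # needs special handling in a production implementation.
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + degree] - knots[i]
    if denom > 0.0:  # 0/0 terms at repeated knots are taken as zero
        left = (u - knots[i]) / denom * basis(i, degree - 1, u, knots)
    denom = knots[i + degree + 1] - knots[i + 1]
    if denom > 0.0:
        right = ((knots[i + degree + 1] - u) / denom
                 * basis(i + 1, degree - 1, u, knots))
    return left + right

# Cubic basis functions for five control points (the knot vector of
# Fig. 10.4c, discussed below); they sum to 1 for any u in [0, 1).
knots = [0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0, 1.0]
values = [basis(i, 3, 0.25, knots) for i in range(5)]
```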

For simplicity, we show examples of one-dimensional B-spline functions and NURBS curves. Figure 10.4 illustrates B-spline functions of the 1st (linear), 2nd (quadratic) and 3rd (cubic) degrees, calculated for five control points with respect to the parametric variable u.



Fig. 10.4
Basis functions calculated for five control points with respect to the parametric variable u: (a) 1st (linear), (b) 2nd (quadratic) and (c) 3rd (cubic) degree functions

Figure 10.5 illustrates the effect that the degree of the basis functions has upon the shape of a reconstructed NURBS curve c(u). In Fig. 10.5, linear basis functions were used for reconstructing the curve $c_1$, whereas quadratic and cubic functions were used for reconstructing the curves $c_2$ and $c_3$, respectively. The smoothest curve was $c_3$, which shows the advantage of using cubic basis functions.



Fig. 10.5
Effect of degree of the basis functions on the smoothness of the reconstructed NURBS curves

The elements $u_i$ and $v_j$ in Eqs. (10.5)–(10.8) are called ‘knots’ because they define the connection between the basis functions in the parametric space. The knots, concatenated in ascending order, form ‘knot vectors’. For the parametric variables u and v, the knot vectors normalized to the range between 0 and 1 can be expressed as



$$ \dot{\boldsymbol{u}}=\left[0,\dots,0,\dots,u_k,u_{k+1},\dots,1,\dots,1\right];\quad k=1,\dots,n+c+2, $$

(10.9)




$$ \dot{\boldsymbol{v}}=\left[0,\dots,0,\dots,v_l,v_{l+1},\dots,1,\dots,1\right];\quad l=1,\dots,m+d+2. $$

(10.10)

The design of the knot vectors (i.e. the location and spacing of the knots) affects the shape of the computed surface. The number of repetitions of a knot at either end of the vectors $\dot{\boldsymbol{u}}$ and $\dot{\boldsymbol{v}}$ is referred to as the multiplicity of the knot. Increasing the multiplicity of a knot gives the control points on its side a larger influence on the shape, because more basis functions are connected at that knot. A multiplicity equal to the degree of the basis functions plus one is needed for the first and last elements in order to allow the surface to pass through the boundary control points. In Fig. 10.4, the knot vectors $\dot{\boldsymbol{u}}=[0,0,0.25,0.5,0.75,1,1]$, $\dot{\boldsymbol{u}}=[0,0,0,0.33,0.67,1,1,1]$ and $\dot{\boldsymbol{u}}=[0,0,0,0,0.5,1,1,1,1]$ were used for computing the linear, quadratic and cubic basis functions, respectively (the knots are represented as black bars on the u axis).
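Under these conventions, a uniform clamped knot vector can be generated programmatically. The sketch below (the function name and the uniform spacing of the interior knots are our own illustrative choices) reproduces the knot vectors used in Fig. 10.4:

```python
import numpy as np

def clamped_knot_vector(num_ctrl, degree):
    """Uniform clamped knot vector in [0, 1]: the first and last knots
    have multiplicity degree + 1, so the curve or surface passes
    through the boundary control points."""
    num_interior = num_ctrl - degree - 1
    interior = np.linspace(0.0, 1.0, num_interior + 2)[1:-1]
    return np.concatenate([np.zeros(degree + 1), interior,
                           np.ones(degree + 1)])

print(clamped_knot_vector(5, 1))  # [0, 0, 0.25, 0.5, 0.75, 1, 1]
print(clamped_knot_vector(5, 3))  # [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
```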

The algorithm for computing a point $\widehat{\boldsymbol{S}}(u, v)$ on the reconstructed NURBS surface can be summarized as follows:



  • Step 1: Select the control points from the original surface image parameterized with the two parametric variables u and v.


  • Step 2: Set the degrees of the surface in u and v directions – i.e. c and d.


  • Step 3: Set the knot vectors in the u and v directions.

For the parametric value u, perform the following steps:



  • Step 4: Find the knot span – i.e. the interval $[u_k, u_{k+1})$ in which u lies.


  • Step 5: Compute the basis functions $N_{k-c,c}(u), \dots, N_{k,c}(u)$.

For the parametric value v, perform the following steps:



  • Step 6: Find the knot span – i.e. the interval $[v_l, v_{l+1})$ in which v lies.


  • Step 7: Compute the basis functions $N_{l-d,d}(v), \dots, N_{l,d}(v)$.


  • Step 8: Compute the query point $\widehat{\boldsymbol{S}}(u, v)$ using Eq. (10.4).
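The following Python sketch follows these steps to evaluate Eq. (10.4) at a single parameter pair. For brevity it loops over all control points instead of restricting the computation to the active knot span (Steps 4–7), which is mathematically equivalent because the basis functions vanish outside their span; it reuses the basis() and clamped_knot_vector() functions from the sketches above, and the control grid and weights are illustrative values:

```python
import numpy as np

def nurbs_point(u, v, ctrl, weights, c, d, knots_u, knots_v):
    """Evaluate the query point S_hat(u, v) of Eq. (10.4).

    ctrl:    (n+1, m+1, 3) array of control points p_{i,j}  (Step 1).
    weights: (n+1, m+1) array of weighting factors w_{i,j}.
    c, d:    basis-function degrees in the u and v directions (Step 2).
    """
    num = np.zeros(3)
    den = 0.0
    for i in range(ctrl.shape[0]):
        Nu = basis(i, c, u, knots_u)      # Step 5
        if Nu == 0.0:
            continue                      # outside the knot span of u
        for j in range(ctrl.shape[1]):
            Nv = basis(j, d, v, knots_v)  # Step 7
            w = Nu * Nv * weights[i, j]
            num += w * ctrl[i, j]
            den += w
    return num / den                      # Step 8: Eq. (10.4)

# Example: a cubic-by-cubic patch over a 4 x 4 grid of control points.
ctrl = np.array([[[i, j, np.sin(i) * np.cos(j)] for j in range(4)]
                 for i in range(4)], dtype=float)
weights = np.ones((4, 4))
ku = clamped_knot_vector(4, 3)            # Step 3
kv = clamped_knot_vector(4, 3)
p = nurbs_point(0.4, 0.7, ctrl, weights, 3, 3, ku, kv)
```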

Figure 10.6 shows a reconstructed surface of the nose region of a head phantom, obtained using NURBS modelling. The control points were derived from an image of the phantom’s surface, as shown in Fig. 10.6b. A smooth surface of the nose region was obtained by using cubic basis functions, as shown in Fig. 10.6c.



Fig. 10.6
An example for reconstruction of a patient’s surface using NURBS modelling from a surface image. (a) Nose region defined on a head phantom. (b) Control points obtained from the surface image. (c) The reconstructed NURBS surface of the nose region


10.4 Analysis of a Patient’s Surface Using Differential Geometry


The localization of anatomical landmarks on patient surfaces is beneficial for monitoring patient positioning during radiation therapy, particularly in the image registration step. Feature points on the localized landmarks can be used to accelerate the image registration process by dividing it into two steps: coarse registration and fine registration. During the coarse registration, the feature points are used to identify initial transformation parameters in a short time, as they constitute a subset of the original point distribution. The fine registration then refines the transformation parameters by further minimizing the error function (Placht et al. 2012).

One intuitive approach to the localization of anatomical landmarks in a surface image is to compute differential-geometric (curvature) features. The mathematical field of differential geometry includes theories for analysing the geometrical characteristics of a surface in 3D space (Pressley 2010). Differential geometry uses differential vectors of surface points to identify and analyse the surface characteristics. However, this method of analysis is limited by its sensitivity to noise in the surface (Agam and Tang 2005). This sensitivity affects the accuracy of the estimated features and, consequently, the stability of the localized points. Aside from the use of appropriate spatial and temporal filtering techniques, NURBS surface reconstruction is expected to improve the outcome of such analysis (Soufi et al. 2016).

In order to localize anatomical landmarks, we focus on the property of surface curvature. The assumption here is that anatomical landmarks on the surface of the patient’s body – especially the surface of the head – have distinctly curved shapes. For example, Fig. 10.7 shows anatomical landmarks in the nose region of a head phantom. As can be seen, the apex and alae of the nose have convex shapes, whereas the nasolabial and nasofacial sulci have concave shapes.



Fig. 10.7
Anatomical landmarks of the nose region with corresponding curvature types

We introduce the principles of analysing the patient’s surface based on its curvature by first defining the curvature of a curve. Next, we explain the concept of the curvature of a surface, which will help us to measure the local shape of surface regions analytically and localize the feature points.


10.4.1 Curvature of a Curve


The curvature of a curve can be obtained by studying the changes in the position vector representing a point’s position on the parameterized curve. Suppose that c(s) is a parameterized regular curve, i.e. the curve is differentiable and $\left\Vert \dot{\boldsymbol{c}}(s)\right\Vert \ne 0$ at all points, where $\dot{\boldsymbol{c}}(s)$ indicates the first derivative (velocity) of c(s). The parameter $s\in \mathbb{R}$ is called the arc length and indicates the length of the curve segment measured between two points on the curve. The curvature of c(s) can then be calculated as



$$ \kappa =\frac{\left\Vert \ddot{\boldsymbol{c}}(s)\times \dot{\boldsymbol{c}}(s)\right\Vert }{{\left\Vert \dot{\boldsymbol{c}}(s)\right\Vert}^3}, $$

(10.11)
where ‘×’ denotes the cross product operator and $\ddot{\boldsymbol{c}}(s)$ indicates the second derivative (acceleration) of c(s).

Now, assume that a short segment of c(s) can be approximated by an arc of a circle, as shown in Fig. 10.8, and that we want to calculate its curvature. In this case, c(s) has the following parameterization on that segment:



$$ \boldsymbol{c}(s)=\left(x_0+r\cos\left(\theta\right),\; y_0+r\sin\left(\theta\right)\right), $$

(10.12)
where $m(x_0, y_0)$ is the centre of the circle, r is its radius and θ is the central angle of the arc in radians. By using the arc-length relationship (s = θr), Eq. (10.12) can be rewritten as



$$ \boldsymbol{c}(s)=\left({x}_0+ r \cos \left(\frac{s}{r}\right),{y}_0+ r \sin \left(\frac{s}{r}\right)\right). $$

(10.13)
Thus, the first and second derivatives can be derived as shown in Eqs. (10.14) and (10.15), respectively:



$$ \dot{\boldsymbol{c}}(s)=\left(- \sin \left(\frac{s}{r}\right), \cos \left(\frac{s}{r}\right)\right), $$

(10.14)




$$ \ddot{\boldsymbol{c}}(s)=\left(-\frac{1}{r} \cos \left(\frac{s}{r}\right),-\frac{1}{r} \sin \left(\frac{s}{r}\right)\right). $$

(10.15)
By using the cross product formula:



$$ \boldsymbol{a}\times \boldsymbol{b}=\parallel \boldsymbol{a}\parallel \parallel \boldsymbol{b}\parallel \sin (\alpha )\boldsymbol{n}, $$

(10.16)
where α is the angle between a and b (α is a right angle because $\dot{\boldsymbol{c}}(s)$ and $\ddot{\boldsymbol{c}}(s)$ are perpendicular) and n is a unit normal vector to the plane spanned by $\dot{\boldsymbol{c}}(s)$ and $\ddot{\boldsymbol{c}}(s)$, the curvature of c(s) can be calculated as



$$ \kappa =\frac{\sqrt{\left(-\frac{1}{r}\cos\left(\frac{s}{r}\right)\right)^2+\left(-\frac{1}{r}\sin\left(\frac{s}{r}\right)\right)^2}\,\sqrt{\cos^2\left(\frac{s}{r}\right)+\sin^2\left(\frac{s}{r}\right)}}{\left(\sqrt{\cos^2\left(\frac{s}{r}\right)+\sin^2\left(\frac{s}{r}\right)}\right)^3}=\frac{1}{r}. $$

(10.17)
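The result κ = 1/r is easy to verify numerically. The following sketch approximates the derivatives in Eq. (10.11) with central finite differences and recovers the curvature of a circle (the radius and evaluation point are illustrative values):

```python
import numpy as np

def curvature(c, s, h=1e-4):
    """Curvature of a parameterized curve c(s) via Eq. (10.11), with the
    velocity and acceleration estimated by central finite differences.
    c must return a 3-vector so that the cross product is defined."""
    d1 = (c(s + h) - c(s - h)) / (2.0 * h)           # first derivative
    d2 = (c(s + h) - 2.0 * c(s) + c(s - h)) / h**2   # second derivative
    return np.linalg.norm(np.cross(d2, d1)) / np.linalg.norm(d1) ** 3

# A circle of radius r = 20 mm in the z = 0 plane, parameterized by
# arc length as in Eq. (10.13).
r = 20.0
circle = lambda s: np.array([r * np.cos(s / r), r * np.sin(s / r), 0.0])

print(curvature(circle, s=5.0))  # ~0.05 = 1/r, in agreement with Eq. (10.17)
```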
