Shape-Based Tensor Level Set Framework for Vertebral Body Segmentation



Fig. 1
The region of interest in our experiment: a A clinical CT slice of a human vertebra. b The blue color shows the VB region





Fig. 2
An example of the initial labeling. a Original CT image, b detection of the VB region using the MF, c the initial labeling f* obtained using tensor level set segmentation, and d the SDF of the initial segmentation (f*), which is used in the registration phase. The red color shows the zero level contour




Fig. 3
Our proposed shape-based segmentation. Our framework consists of two main stages: the training phase and the segmentation phase




2 Methods


An intensity-based model alone may not be enough to obtain the optimum segmentation. Hence, we propose a new shape-based segmentation method that proceeds in several steps. As a pre-processing step, we extract the human spine area using the matched filter (MF) adopted in [6]. As shown in Fig. 2a, b, the MF is employed to detect the VB automatically. This process roughly removes the spinous processes and pedicles, and it eliminates the need for user interaction. We tested the matched filter on 3,000 clinical CT images; the VB detection accuracy is $$97.6\,\%$$. In the second phase, we obtain the initial labeling (f*) using the region-based tensor level set model described in [14]. Finally, we register the initial labeled image to the shape priors to obtain the optimum labeling, as in [1]. To obtain the shape priors (p), we apply 2D-PCA to all training images. Figure 3 summarizes the main components of our framework. The following sections give more details about the shape model construction and the segmentation method.
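As an illustration of the detection step, the sketch below implements a generic matched filter as cross-correlation with a mean VB template. It is only a stand-in for the specific filter of [6]; the function name, the template, and the use of scipy are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import correlate2d

def detect_vb(ct_slice: np.ndarray, template: np.ndarray):
    """Return the (row, col) of the strongest matched-filter response.

    A matched filter for a known pattern is correlation with that
    pattern; here a mean-subtracted VB template plays that role.
    """
    response = correlate2d(ct_slice - ct_slice.mean(),
                           template - template.mean(), mode="same")
    return np.unravel_index(np.argmax(response), response.shape)
```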


2.1 Shape Model Construction


In this work, we describe the shape representation using the SDF, as in [2]. The objective of this step is to capture the most important information of the training images using 2D-PCA. As opposed to conventional PCA, 2D-PCA operates on 2D matrices rather than 1D vectors. This means that the image does not need to be pre-transformed into a vector. In addition, the image covariance matrix (G) can be constructed directly from the original image matrices. As a result, 2D-PCA has two important advantages over PCA. First, it is easier to evaluate G accurately since its size under 2D-PCA is much smaller. Second, less time is required to determine the corresponding eigenvectors [15]. 2D-PCA projects an image matrix X, which is an $$m\times n$$ matrix, onto a vector b, which is an $$n\times 1$$ vector, by a linear transformation. The resultant projection coefficient vector y is:


$$\begin{aligned} {\mathbf y}={\mathbf X}{\mathbf b}. \end{aligned}$$

(1)
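For concreteness, Eq. (1) in numpy, with random stand-ins for the image matrix and the projection axis (illustrative values only):

```python
import numpy as np

m, n = 120, 120
X = np.random.rand(m, n)   # an m x n image matrix (illustrative values)
b = np.random.rand(n)
b /= np.linalg.norm(b)     # projection axes are unit vectors in practice
y = X @ b                  # Eq. (1): the m-dimensional coefficient vector
```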
Suppose there are M training images; the ith training image is denoted by $${{\mathbf X}}_i\ ({i}=1,2,\ldots ,{M})$$, and the average image of all training samples is denoted by $$\overline{{\mathbf X}}=\frac{1}{M}\sum ^M_{i=1}{{{\mathbf X}}_i}$$. Then, the image covariance matrix G is defined as in [15]:


$$\begin{aligned} {\mathbf G}{\mathbf =}\frac{1}{M}\sum ^M_{i=1}{({{\mathbf X}}_i-\overline{{\mathbf X}})^t}\left( {{\mathbf X}}_i-\overline{{\mathbf X}}\right) . \end{aligned}$$

(2)
It is clear that G is an $${n}\times {n}$$ nonnegative definite matrix. As in PCA, the goal of 2D-PCA is to find projection axes that maximize $${{\mathbf b}}^{{\mathbf t}}{\mathbf G}{\mathbf b}$$. The optimal K projection axes $${{\mathbf b}}_k$$, $$k=1,2,\ldots ,K$$, that maximize this criterion are the eigenvectors of G corresponding to its K largest eigenvalues. For an image X, we can use its reconstruction $$\widetilde{{\mathbf X}}$$, defined below, to approximate it:


$$\begin{aligned} \widetilde{{\mathbf X}}=\overline{{\mathbf X}}+\sum ^K_{k=1}{{{\mathbf y}}_k{{{\mathbf b}}_k}^{{\mathbf t}}}, \end{aligned}$$

(3)
where $${{\mathbf y}}_k=\left( {\mathbf X}-\overline{{\mathbf X}}\right) {{\mathbf b}}_k$$ is called the $$k{\text {th}}$$ principal component vector of the sample image X. The principal component vectors form an $${m}\times {K}$$ matrix $${\mathbf Y}=[{{\mathbf y}}_1,{{\mathbf y}}_2,\ldots ,{{\mathbf y}}_K]$$. Letting $${\mathbf B}=[{{\mathbf b}}_1,{{\mathbf b}}_2,\ldots ,{{\mathbf b}}_K]$$, we can rewrite Eq. 3 as:


$$\begin{aligned} \widetilde{{\mathbf X}}=\overline{{\mathbf X}}+{\mathbf Y}{{\mathbf B}}^{{\mathbf t}}. \end{aligned}$$

(4)
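The numpy sketch below ties Eqs. (1)–(4) together; the random training matrices and the variable names are illustrative assumptions, not the authors' data:

```python
import numpy as np

M, m, n, K = 80, 120, 120, 10
X = np.random.rand(M, m, n)      # M training image matrices (illustrative)
X_bar = X.mean(axis=0)           # average image

# Eq. (2): image covariance matrix G (n x n) from centered matrices.
D = X - X_bar
G = sum(d.T @ d for d in D) / M

# Projection axes: eigenvectors of G for the K largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(G)   # eigh returns ascending order
B = eigvecs[:, ::-1][:, :K]            # n x K matrix [b_1, ..., b_K]

# Eqs. (1) and (3): principal component matrix Y (m x K), then Eq. (4).
Y = (X[0] - X_bar) @ B                 # y_k = (X - X_bar) b_k
X_rec = X_bar + Y @ B.T                # reconstruction X~ = X_bar + Y B^t
```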
However, one disadvantage of 2D-PCA compared to PCA is that more coefficients are needed to represent an image. From Eq. 4, it is clear that the dimension of the 2D-PCA principal component matrix Y ($$m\times K$$) is always much higher than that of PCA. To reduce the dimension of Y, conventional PCA is applied for further dimensionality reduction after 2D-PCA, as discussed below.

Now, let the training set consist of M training images $$\{{\mathbf I}_1,\ldots ,{\mathbf I}_M\}$$ with SDFs $$\{\varPhi _{1},\ldots ,\varPhi _{M}\}$$. All images are binary, pre-aligned, and normalized to the same resolution. As in [2], we obtain the mean level set function of the training shapes, $$\overline{\varPhi }$$, as the average of these M signed distance functions. To extract the shape variabilities, $$\overline{\varPhi }$$ is subtracted from each of the training SDFs. The resulting mean-offset functions, denoted $$\{{\widehat{\varPhi }}_1,\ldots ,{\widehat{\varPhi }}_M\}$$, are used to measure the variabilities of the training images. We use 80 training VB images of $$120 \times 120$$ pixels in our experiment. According to Eq. (2), the constructed matrix G is:


$$\begin{aligned} {\mathbf G}=\frac{1}{M}\sum ^{M}_{i=1}{{{\widehat{\varPhi }}_i}^t\,{\widehat{\varPhi }}_i}. \end{aligned}$$

(5)
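Specializing the earlier sketch to the training SDFs, under the stated assumption of 80 pre-aligned $$120\times 120$$ signed distance maps (random surrogates stand in for the real SDFs here):

```python
import numpy as np

M = 80
Phi = np.random.rand(M, 120, 120)   # stand-ins for the aligned training SDFs
Phi_bar = Phi.mean(axis=0)          # mean level set function
Phi_hat = Phi - Phi_bar             # mean-offset functions

# Eq. (5): image covariance matrix of the mean-offset functions.
G = sum(p.T @ p for p in Phi_hat) / M
```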
Experimentally, we find that the minimum suitable value is $$K=10$$; below this value, the accuracy of our segmentation algorithm falls drastically. After choosing the eigenvectors corresponding to the 10 largest eigenvalues, $${{\mathbf b}}_1,{{\mathbf b}}_2,\ldots ,{{\mathbf b}}_{10}$$, we obtain the principal component matrix $${{\mathbf Y}}_i$$ (of size $$m\times K=120\times 10$$) for each SDF of our training set ($$i=1,2,\ldots ,80$$). For further dimensionality reduction, conventional PCA is applied to the principal components $$\{{\mathop {{\mathbf Y}}\limits ^{\rightharpoonup }}_1,\ldots ,{\mathop {{\mathbf Y}}\limits ^{\rightharpoonup }}_M\}$$, where $$\mathop {{\mathbf Y}}\limits ^{\rightharpoonup }$$ denotes the vector representation of Y. The reconstructed components (after retransforming to matrix representation) are:


$$\begin{aligned} {\widetilde{{\mathbf Y}}}_{\{{\mathbf l},{\mathbf h}\}}{\mathbf =}{\mathbf U}{{\mathbf e}}_{\{{\mathbf l},{\mathbf h}\}}, \end{aligned}$$

(6)
where U is the matrix containing the L eigenvectors corresponding to the L largest eigenvalues $$\lambda _{l}$$ ($$l =1,2,\ldots , L$$), and $${{\mathbf e}}_{\{l,h\}}$$ is the set of model parameters, described as


$$\begin{aligned} {{\mathbf e}}_{\{l,h\}}=h\sqrt{{\lambda }_l}, \end{aligned}$$

(7)
where $$l=1,\ldots ,L$$ and $$h=\{-\beta ,\ldots ,\beta \}$$, and $$\beta $$ is a constant which can be chosen arbitrarily (in our experiments, we chose $$L=4$$ and $$\beta =3$$). The new principal components of the training SDFs are represented as $$\{{\widetilde{{\mathbf Y}}}_1,\ldots ,{\widetilde{{\mathbf Y}}}_N\}$$ instead of $$\{{{\mathbf Y}}_1,\ldots ,{{\mathbf Y}}_M\}$$, where N is the product of L and the number of elements in h, i.e. $$N=L(2\beta +1)$$. Given the set $$\{{\widetilde{{\mathbf Y}}}_1,\ldots ,{\widetilde{{\mathbf Y}}}_N\}$$, the new projected training SDFs are obtained as follows:


$$\begin{aligned} {\widetilde{\varPhi }}_j{\mathbf =}\overline{\varPhi }+{\widetilde{{\mathbf Y}}}_j{{\mathbf B}}^{{\mathbf t}},\quad j=1, 2,\ldots , \textit{N.} \end{aligned}$$

(8)
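A numpy sketch of the synthesis step, Eqs. (6)–(8), follows. The random inputs (B, Phi_bar, the vectorized Y_i) are illustrative stand-ins for the quantities computed earlier, and all variable names are assumptions:

```python
import numpy as np

M, m, n, K, L, beta = 80, 120, 120, 10, 4, 3
B = np.linalg.qr(np.random.rand(n, K))[0]   # stand-in for the axes of Eq. (2)
Phi_bar = np.random.rand(m, n)              # stand-in for the mean SDF
Ys = np.random.rand(M, m * K)               # vectorized Y_i of the training SDFs

# Conventional PCA on the vectorized principal-component matrices.
C = np.cov(Ys, rowvar=False)                # (mK x mK) covariance
lam, U = np.linalg.eigh(C)                  # ascending eigenvalues
lam, U = lam[::-1][:L], U[:, ::-1][:, :L]   # keep the L largest modes

# Eqs. (6)-(8): sweep mode l over h in {-beta, ..., beta}, project back.
shapes = []
for l in range(L):
    for h in range(-beta, beta + 1):
        e = np.zeros(L)
        e[l] = h * np.sqrt(lam[l])                 # Eq. (7)
        Y_tilde = (U @ e).reshape(m, K)            # Eq. (6)
        shapes.append(Phi_bar + Y_tilde @ B.T)     # Eq. (8)

assert len(shapes) == L * (2 * beta + 1)           # N synthesized SDFs
```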
Finally, the shape model is required to capture the variations in the training set. This model is considered to be a weighted sum of the projected SDFs (Eq. 8).
