Monte Carlo Sampling for the Segmentation of Tubular Structures



Fig. 1
The feature space is defined by the cross-section center position $\mathbf{x} = (x_1, x_2, x_3)$, the cross-section tangential direction $\boldsymbol{\Theta} = (\theta_1, \theta_2, \theta_3)$, and the lumen pixel intensity distribution $\mathbf{p}_{vessel}$






2 Segmentation Model & Theoretical Foundations



2.1 Vessel Model & Particle Filters


To explain our method at a conceptual level, let us assume that a segment of the vessel has been detected: a 2D shape on a 3D plane. Similar to region growing and front propagation techniques, our method aims to segment the vessel in adjacent planes. To this end, one can consider hypotheses ω of the vessel being at a certain location (x), having a certain orientation (Θ), referring to a certain shape (ε), for which an elliptic model is a common choice, and exhibiting certain appearance characteristics (p_vessel).



$$ \underset{\mathit{position}}{\underbrace{\mathrm{x}=\left({x}_1,{x}_2,{x}_3\right)}},\underset{\mathit{orientation}}{\underbrace{\boldsymbol{\varTheta} =\left({\theta}_1,{\theta}_2,{\theta}_3\right)}},\underset{\mathit{shape}}{\underbrace{\upepsilon =\left(\alpha, \beta, \phi \right)}},\kern0.36em \underset{\mathit{appearance}}{\underbrace{\mathbf{p}_{\mathit{vessel}}}} $$

(1)
Then, segmentation consists in finding the optimal parameters of ω given the observed 3D volume. Let us consider a probabilistic interpretation of the problem, with π(ω) being the posterior distribution that measures the fitness of the vector ω to the observation. If such a law were available, segmentation would consist in finding at each step the set of parameters ω that maximizes π(ω). Since such a model is unknown, however, one can assume an autoregressive mechanism that, given prior knowledge, predicts the actual position of the vessel and provides a sequential estimate of its corresponding states. To this end, we define:



  • a state vector ω composed of x, Θ, ε and p_vessel (Eq. (1))


  • an iterative process to predict the next state and update the density function, which can be done using a Bayes sequential estimator: it computes the pdf of the present state $\omega_t$ of the system given the observations $z_{1:t}$ from time 1 to time $t$, namely $\pi(\omega_t \mid z_{1:t})$. Assuming that one has access to the prior pdf $\pi(\omega_{t-1} \mid z_{1:t-1})$, the posterior pdf $\pi(\omega_t \mid z_{1:t})$ is computed according to the Bayes rule:


    
$$ \pi\left(\omega_t \mid z_{1:t}\right) = \frac{\pi\left(z_t \mid \omega_t\right)\,\pi\left(\omega_t \mid z_{1:t-1}\right)}{\pi\left(z_t \mid z_{1:t-1}\right)}. $$


  • a distance between the prediction and the actual observation.
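As a concrete illustration, the state vector of Eq. (1) can be held in a plain container. The sketch below is a minimal Python version; the field names and the Gaussian-mixture parametrization of p_vessel are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VesselState:
    """One hypothesis omega: position x, orientation Theta, elliptic
    cross-section shape (alpha, beta, phi), and appearance p_vessel
    (sketched here as Gaussian-mixture parameters)."""
    x: np.ndarray          # (x1, x2, x3): cross-section center position
    theta: np.ndarray      # (theta1, theta2, theta3): tangential direction
    alpha: float           # ellipse semi-axis
    beta: float            # ellipse semi-axis
    phi: float             # in-plane rotation of the ellipse
    p_vessel: np.ndarray   # appearance parameters, e.g. (mean, var, weight) per component
```

A particle filter then maintains a weighted population of such states rather than a single optimized instance.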

Simple parametric models are susceptible to failure in the presence of vessel irregularities (pathologies, prostheses, …). Therefore, instead of optimizing a single state vector, multiple hypotheses are generated and weighted according to the actual observation. Nevertheless, in practical cases it is impossible to compute the posterior pdf $\pi(\omega_t \mid z_{1:t})$ exactly. An elegant way to implement such a technique is to use particle filters, where each hypothesis is a state in the feature space (a particle), and the collection of hypotheses is a sampling of the feature space.

Particle Filters [1, 8] are sequential Monte Carlo techniques used to estimate Bayesian posterior probability density functions (pdf) [16, 34]. In terms of a mathematical formulation, such a method approximates the posterior pdf by $M$ random measures $\{\omega_t^m,\ m = 1..M\}$ associated with $M$ weights $\{\lambda_t^m,\ m = 1..M\}$, such that



$$ \pi\left(\omega_t \mid z_{1:t}\right) \approx \sum_{m=1}^{M} \lambda_t^m\, \delta\left(\omega_t - \omega_t^m\right), $$

(2)
where each weight $\lambda_t^m$ reflects the importance of the sample $\omega_t^m$ in the pdf. The samples $\omega_t^m$ are drawn according to the principle of Importance Sampling [9], from an importance density $q(\omega_t \mid \omega_{t-1}^m, z_t)$, and it can be shown that their weights $\lambda_t^m$ are updated according to



$$ \lambda_t^m \propto \lambda_{t-1}^m\, \frac{\pi\left(z_t \mid \omega_t^m\right)\,\pi\left(\omega_t^m \mid \omega_{t-1}^m\right)}{q\left(\omega_t^m \mid \omega_{t-1}^m, z_t\right)}. $$

(3)
Once a set of samples has been drawn, the likelihood $\pi(z_t \mid \omega_t^m)$ can be computed from the observation $z_t$ for each sample, and the estimate of the posterior pdf can be sequentially updated.
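A minimal numerical sketch of this predict/update/resample loop is given below for a 1-D toy state (in the actual method, ω is the full vector of Eq. (1)). The transition prior is taken as the importance density, so the weight update of Eq. (3) reduces to multiplication by the likelihood; the Gaussian transition and observation models, and the function name, are illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, z, transition_std=0.1, obs_std=0.5, rng=None):
    """One bootstrap particle filter step. With q taken as the transition
    prior, Eq. (3) becomes: lambda_t^m proportional to
    lambda_{t-1}^m * pi(z_t | omega_t^m)."""
    rng = np.random.default_rng() if rng is None else rng
    M = len(particles)
    # Predict: draw omega_t^m from the transition prior pi(omega_t | omega_{t-1}^m)
    particles = particles + rng.normal(0.0, transition_std, size=M)
    # Update: reweight each hypothesis by the observation likelihood pi(z_t | omega_t^m)
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights = weights / weights.sum()  # normalize so the discrete pdf of Eq. (2) sums to one
    # Resample when the effective sample size collapses below M/2
    if 1.0 / np.sum(weights ** 2) < M / 2:
        idx = rng.choice(M, size=M, p=weights)
        particles, weights = particles[idx], np.full(M, 1.0 / M)
    return particles, weights
```

Calling this function once per cross-section advances the weighted population along the vessel.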


2.2 Prediction & Observation: Distance


This theory is now applied to vessel tracking. Each one of the particles $\omega_t^m$ represents a hypothetical state of the vessel; a probability measure $p(z_t \mid \omega_t^m)$ is used to quantify how well the image data $z_t$ fits the vessel model $\omega_t^m$. To this end, we use the image terms, in particular the intensities that correspond to the vessel in the current cross-section. The vessel's cross-section is defined by the hypothetical state vector (see Eq. (1)) with a 3D location, a 3D orientation, a lumen diameter and a pixel intensity distribution model (the multi-Gaussian). The observed distribution of this set is approximated by a Gaussian mixture model fitted according to the Expectation-Maximization principle. Each hypothesis is composed of the features given in Eq. (1); therefore, the probability measure is essentially the likelihood of the observation $z$ given the appearance model $A$. The following measures (loosely called probabilities) are normalized so that their sum over all particles is equal to one. Assuming statistical independence between the shape $S$ and the appearance model $A$, $p(z_t \mid \omega_t) = p(z_t \mid S)\, p(z_t \mid A)$.
