Propagation, Level Set Methods and Grouping

Let us consider a parametric curve
$$ \Gamma (p):[0,1]\to \mathcal{R}\times \mathcal{R} $$
defined in a plane Ω. The most general form of the snake model consists of:




$$ E(\Gamma ) = \int_0^{1}\left(\alpha {E_{int}}(\Gamma (p)) + \beta {E_{img}}({\mathcal{I}}(\Gamma (p))) + \gamma {E_{ext}}(\Gamma (p))\right)dp $$

(1)
where 
$$ \mathcal{I} $$
is the input image, E int [= w 1|Γ′| + w 2|Γ″|] imposes smoothness constraints (smooth derivatives), E img [= −|∇
$$ \mathcal{I} $$
|] attracts the curve towards the image features (strong edges), E ext encodes either user interaction or prior knowledge, and α, β, γ are coefficients that balance the importance of these terms.
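As an illustration, the energy of Eq. (1) might be discretised over a set of control points as in the following minimal Python sketch (the function name, the finite-difference approximations of Γ′ and Γ″, and the omission of the external term are assumptions made for illustration):

```python
import numpy as np

def snake_energy(pts, image_grad_mag, w1=1.0, w2=1.0, alpha=1.0, beta=1.0):
    """Discrete approximation of the snake energy in Eq. (1) for a closed
    curve sampled at control points `pts` (N x 2 array of (x, y) positions).

    `image_grad_mag` holds |grad I| on the pixel grid; the image term
    E_img = -|grad I| is evaluated at the (rounded) control-point locations.
    The external/prior term E_ext is omitted in this sketch.
    """
    # finite differences on a closed curve: Gamma' and Gamma''
    d1 = np.roll(pts, -1, axis=0) - pts                                # Gamma'
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # Gamma''

    e_int = w1 * np.linalg.norm(d1, axis=1) + w2 * np.linalg.norm(d2, axis=1)

    xi = np.clip(pts[:, 0].round().astype(int), 0, image_grad_mag.shape[1] - 1)
    yi = np.clip(pts[:, 1].round().astype(int), 0, image_grad_mag.shape[0] - 1)
    e_img = -image_grad_mag[yi, xi]

    # average over control points approximates the integral over p in [0, 1]
    return np.mean(alpha * e_int + beta * e_img)
```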

The calculus of variations can be used to optimise such a cost function. To this end, a certain number of control points are selected along the curve, and their positions are updated according to the partial differential equation that is recovered through the differentiation of E(Γ) at a given control point of Γ. In the most general case a flow of the following nature is recovered:



$$ \frac{\partial \Gamma }{\partial \tau }(p;\tau ) = \underbrace {(\alpha {F_{gm}}(\Gamma ) + \beta {F_{img}}({\mathcal{I}}) + \gamma {F_{pr}}(\Gamma ))}_{F}\,{\mathcal{N}} $$

(2)
where 
$$ \mathcal{N} $$
is the inward normal, F gm depends on the spatial derivatives of the curve (its curvature, etc.), F img is the force that connects the propagation with the image domain, and F pr (Γ) is a speed term that compares the evolving curve with a prior and enforces similarity with it. The tangential component of this flow has been omitted since it affects only the internal parameterisation of the control points and does not change the geometric form of the curve itself.
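A minimal sketch of one explicit step of such a flow over sampled control points is given below (the helper name, the time step and the sign convention for the inward normal are illustrative assumptions; the tangential component is omitted, as discussed above):

```python
import numpy as np

def evolve_control_points(pts, speed, dt=0.1):
    """One explicit step of the flow in Eq. (2): each control point moves
    along the inward normal with a scalar speed F evaluated at that point.

    `pts`   : N x 2 array of control points of a closed curve.
    `speed` : length-N array with the combined speed F (geometric + image + prior).
    """
    # tangent via central differences on the closed polygon
    tangent = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True) + 1e-12

    # rotate the tangent by 90 degrees; for a counter-clockwise curve this
    # choice points towards the interior of the curve
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)

    return pts + dt * speed[:, None] * normal
```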

Such an approach suffers from numerous limitations. The number of control points and the sampling rule used to determine their positions can affect the final segmentation result. The estimation of the internal geometric properties of the curve is also problematic and depends on the sampling rule. Control points move according to different speed functions, and therefore a frequent re-parameterisation of the contour is required. Last, but not least, the evolving contour cannot change its topology, so one cannot recover objects that consist of multiple components that are not connected.


2.1 Level Set Method


The level set method was first introduced in [10] and re-invented in [24] to track moving interfaces in the community of fluid dynamics and then emerged in computer vision [3, 21]. The central idea behind these methods is to represent the (closed) evolving curve Γ with an implicit function ϕ that has been constructed as follows:



$$ \phi (s)=\left\{\begin{array}{c}0, s\in \varGamma \\ {}-\upepsilon, s\in {\varGamma}_{in}\\ {}+\upepsilon, s\in {\varGamma}_{out}\end{array}\right. $$
where ε is a positive constant, Γ in the area inside the curve and Γ out the area outside the curve, as shown in [Fig. (1)]. Given the partial differential equation that dictates the deformation of Γ, one can now derive the one for ϕ using the chain rule as follows:



$$ \frac{d}{d\tau}\phi \left(\varGamma \left( p;\tau \right)\right)=\frac{\partial \phi \left(\varGamma \left( p;\tau \right)\right)}{\partial \varGamma}\cdot \underset{F\mathcal{N}}{\underbrace{\frac{\partial \varGamma \left( p;\tau \right)}{\partial \tau}}}+\frac{\partial \phi}{\partial \tau}= F\left(\nabla \phi \cdot \mathcal{N}\right)+{\phi}_{\tau}=0 $$

(3)




Fig. 1
Level set method and tracking moving interfaces; the construction of the (implicit) ϕ function [figure is courtesy of S. Osher]
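Returning to the construction of ϕ above, a minimal sketch of building such an implicit function from binary masks of the interior and of the curve could read as follows (hypothetical helper; it follows the sign convention of the piecewise definition given earlier):

```python
import numpy as np

def initial_phi(inside_mask, contour_mask, eps=1.0):
    """Construct the implicit function of the piecewise definition above:
    0 on Gamma, -eps inside (Gamma_in), +eps outside (Gamma_out).

    `inside_mask`  : boolean array, True for pixels inside the curve.
    `contour_mask` : boolean array, True for pixels lying on the curve.
    """
    phi = np.where(inside_mask, -eps, +eps).astype(float)
    phi[contour_mask] = 0.0
    return phi
```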

Let us consider the arc-length parameterisation of the curve Γ(c). The values of ϕ along the curve are 0 and therefore taking the derivative of ϕ along the curve Γ will lead to the following conditions:



$$ \frac{\partial \phi \left(\varGamma (c)\right)}{\partial c}=0\to \frac{\partial \phi}{\partial \varGamma}\left(\varGamma (c)\right)\cdot \frac{\partial \varGamma}{\partial c}=0\to \nabla \phi (c)\cdot \mathcal{T}(c)=0 $$

(4)
where 
$$ \mathcal{T} $$
(c) is the tangential vector to the contour. Therefore one can conclude that ∇ϕ is orthogonal to the contour and can be used (upon normalisation) to replace the inward normal 
$$ \left[\mathcal{N}=-\frac{\nabla \phi}{\left|\nabla \phi \right|}\right] $$
leading to the following condition on the deformation of ϕ:



$$ - F\left|\nabla \phi \right|+{\phi}_{\tau}=0\to {\phi}_{\tau}= F\left|\nabla \phi \right| $$

(5)
Such a flow establishes a connection between the family of curves Γ that have been propagated according to the original flow and the ones recovered through the propagation of the implicit function ϕ. The resulting flow is parameter free, intrinsic, implicit and can change the topology of the evolving curve under certain smoothness assumptions on the speed function F. Last, but not least, the geometric properties of the curve like its normal and the curvature can also be determined from the level set function [24]. One can see a demonstration of such a flow in [Fig. (2)].



Fig. 2
Demonstration of curve propagation with the level set method; handling of topological changes is clearly illustrated through various initialization configurations (a,b,c)

In practice, given a flow and an initial curve, the level set function is constructed and updated according to the corresponding motion equation at all pixels of the image domain. In order to recover the actual position of the curve, the marching cubes algorithm [20], which seeks the zero-crossings of ϕ, can be used. One should pay attention to the numerical implementation of such a method, in particular to the estimation of the first- and second-order derivatives of ϕ, where the ENO scheme [24] is the one to be considered. One can refer to [36] for a comprehensive survey of the numerical approximation techniques.
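A minimal sketch of such a full-grid update and of a crude zero-crossing extraction follows (Python/NumPy; the first-order upwind approximation is a simple stand-in for the ENO scheme mentioned above, and the sign-dependent switch assumes the form ϕ_τ = F|∇ϕ| of Eq. (5)):

```python
import numpy as np

def level_set_step(phi, F, dt=0.5):
    """One explicit update of phi_tau = F |grad phi| (Eq. (5)) on the full
    pixel grid, using a first-order upwind approximation of |grad phi|.
    Grid spacing is assumed to be 1; boundaries are treated periodically.
    """
    dxm = phi - np.roll(phi, 1, axis=1)   # backward difference in x
    dxp = np.roll(phi, -1, axis=1) - phi  # forward difference in x
    dym = phi - np.roll(phi, 1, axis=0)
    dyp = np.roll(phi, -1, axis=0) - phi

    # the two one-sided gradient magnitudes of the Osher-Sethian scheme
    grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                        np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                         np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)

    # upwind selection for phi_tau = F |grad phi| (note the sign differs from
    # the more common phi_tau + F |grad phi| = 0 convention)
    grad = np.where(F > 0, grad_minus, grad_plus)
    return phi + dt * F * grad

def zero_crossings(phi):
    """Pixels where phi changes sign against a 4-neighbour: a crude stand-in
    for the zero-crossing (marching squares/cubes) extraction of the curve."""
    sx = np.sign(phi) != np.sign(np.roll(phi, -1, axis=1))
    sy = np.sign(phi) != np.sign(np.roll(phi, -1, axis=0))
    return sx | sy
```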

In order to decrease the computational complexity induced by deforming the level set function over the entire image domain, the narrow band algorithm [7] was proposed. The central idea is to update the level set function only within a narrow vicinity of the actual position of the curve. The fast marching algorithm [35, 40] is an alternative technique that can be used to evolve curves in one direction with a known speed function. One can refer to an earlier contribution in this book [Chap. 7] for a comprehensive presentation of this algorithm and its applications. Last, but not least, semi-implicit formulations of the flow that guides the evolution of ϕ were proposed [12, 42], namely the additive operator splitting, which yields a stable and fast evolution with notably large time steps under certain conditions.
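For illustration, the narrow band idea can be sketched on top of the previous update (for clarity the full-grid step is computed and then masked, which does not realise the computational savings of a real implementation; the band width and the rebuilding policy are assumptions):

```python
import numpy as np

def narrow_band_update(phi, F, band_width=3.0, dt=0.5):
    """Narrow-band variant of the update above: phi is modified only in a
    band |phi| < band_width around the zero level set. In practice the band
    must be rebuilt whenever the front approaches its border, and all
    computations are restricted to the band pixels.
    """
    band = np.abs(phi) < band_width
    phi_new = phi.copy()
    phi_new[band] = level_set_step(phi, F, dt)[band]
    return phi_new
```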


2.2 Optimisation and Level Set Methods


The implementation of curve propagation flows was the first attempt to use the level set method in computer vision. Geometric flows, or flows recovered through the optimisation of snake-driven objective functions, were considered in their implicit form. Despite the numerous advantages of the level set variants of these flows, their added value is essentially that of a better numerical implementation tool, since the definition of the cost function or of the original geometric flow remains the core part of the solution. If such a flow or function does not address the desired properties of the problem to be solved, its level set variant will fail. Therefore, a natural step forward for these methods was to formulate the optimisation directly in the level set space.

Such a framework was derived through the definition of simple indicator functions as proposed in [44] with the following behaviour



$$ \delta \left(\phi \right)=\left\{\begin{array}{ll}0, & \phi \ne 0\\ 1, & \phi =0\end{array}\right.,\qquad \mathcal{H}\left(\phi \right)=\left\{\begin{array}{ll}1, & \phi >0\\ 0, & \phi \le 0\end{array}\right. $$

(6)
Once such indicator functions have been defined, an evolving interface Γ can be considered directly on the level set space as



$$ \varGamma =\left\{{\boldsymbol{s}}\in \Omega :\delta \left(\phi \right)=1\right\} $$

(7)
while one can define a dual image partition using the 
$$ \mathcal{H} $$
indicator functions as:





$$ \begin{array}{l}{\varGamma}_{in}=\left\{ s\in \Omega :\mathcal{H}\left(-\phi \right)=1\right\}\\ {\varGamma}_{out}=\left\{ s\in \Omega :\mathcal{H}\left(-\phi \right)=0\right\},\qquad {\varGamma}_{in}\cup {\varGamma}_{out}=\varOmega \end{array} $$

(8)
To obtain a continuous behaviour of the indicator function [
$$ \mathcal{H} $$
], as well as well-defined derivatives [δ] over the entire domain, a more appropriate selection was proposed in [44], namely regularised forms of the Dirac and Heaviside distributions:



$$ {\delta}_{\alpha}\left(\phi \right)=\left\{\begin{array}{ll}0, & \left|\phi \right|>\alpha \\ \frac{1}{2\alpha}\left(1+ \cos \left(\frac{\pi \phi}{\alpha}\right)\right), & \left|\phi \right|<\alpha \end{array}\right. $$

(9)





$$ {\mathcal{H}}_{\alpha}\left(\phi \right)=\left\{\begin{array}{ll}1, & \phi >\alpha \\ 0, & \phi <-\alpha \\ \frac{1}{2}\left(1+\frac{\phi}{\alpha}+\frac{1}{\pi} \sin \left(\frac{\pi \phi}{\alpha}\right)\right), & \left|\phi \right|<\alpha \end{array}\right. $$
Such an indicator function has smooth, continuous derivatives and the following nice property:



$$ \frac{\partial }{\partial \phi}{\mathrm{\mathcal{H}}}_{\alpha}\left(\phi \right)={\delta}_{\alpha}\left(\phi \right) $$
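These regularised indicator functions are straightforward to implement; a minimal NumPy sketch (with α written as the parameter a) could read:

```python
import numpy as np

def dirac_reg(phi, a=1.5):
    """Regularised Dirac delta of Eq. (9): zero for |phi| > a, smooth bump otherwise."""
    d = np.zeros_like(phi, dtype=float)
    inside = np.abs(phi) <= a
    d[inside] = (1.0 / (2 * a)) * (1 + np.cos(np.pi * phi[inside] / a))
    return d

def heaviside_reg(phi, a=1.5):
    """Regularised Heaviside: 1 for phi > a, 0 for phi < -a, a smooth ramp in
    between, so that d/dphi H_a(phi) = delta_a(phi)."""
    h = np.where(phi > a, 1.0, 0.0)
    inside = np.abs(phi) <= a
    h[inside] = 0.5 * (1 + phi[inside] / a + np.sin(np.pi * phi[inside] / a) / np.pi)
    return h
```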
Last, but not least, one can consider the implicit function ϕ to be a signed distance transform D(s, Γ),



$$ \phi (s)=\left\{\begin{array}{ll}0,& s\in \varGamma \\ + D\left(s,\varGamma \right),& s\in {\varGamma}_{in}\\ - D\left(s,\varGamma \right),& s\in \varOmega -{\varGamma}_{in}={\varGamma}_{out}\end{array}\right. $$

(10)
Such a selection is continuous and supports gradient descent minimisation techniques. On the other hand, the distance-function property has to be maintained during the evolution, and therefore frequent re-initialisations using either the fast marching method [35] or PDE-based approaches [38] were considered. In [13] the problem was studied from a different perspective. The central idea was to impose the same speed function on all level lines – the one of the zero level set – an approach that preserves the distance function constraint.
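A common way to obtain such a signed distance function in practice, and to re-initialise it during the evolution, is a Euclidean distance transform of the current interior; a minimal sketch (assuming SciPy's distance_transform_edt) follows:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(inside_mask):
    """Signed distance transform of Eq. (10) from a binary interior mask:
    positive inside Gamma_in, negative in Gamma_out, (approximately) zero on
    Gamma. Re-running this on the sign of an evolving phi is one simple way
    to re-initialise phi as a distance function.
    """
    d_out = distance_transform_edt(~inside_mask)  # distance to the interior, for outside pixels
    d_in = distance_transform_edt(inside_mask)    # distance to the exterior, for inside pixels
    return d_in - d_out
```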



3 Data-driven Segmentation


The first attempt to address such a task was made in [21], where a geometric flow was proposed for image segmentation. Such a flow was implemented in the level set space and aimed to evolve an initial curve towards strong edges, constrained by a curvature effect. Within the last decade, numerous advanced techniques have taken advantage of the level set method for object extraction.


3.1 Boundary-based Segmentation


The geodesic active contour model [4, 17] – a notable scientific contribution in the domain – consists of



$$ E\left(\varGamma \right)={\int}_0^1 g\left(\left|\nabla {\mathcal{I}}_{\sigma}\left(\varGamma (p)\right)\right|\right)\,\left|{\varGamma}^{\prime }(p)\right|\, dp $$

(11)
where 
$$ {{\mathcal{I}}_{\sigma} } $$
is the output of a convolution between the input image and a Gaussian kernel and 
$$ g $$
is a monotonically decreasing function. Such a cost function seeks a minimal-length geodesic curve that is attracted to the desired image features, and it is equivalent to the original snake model once the second-order smoothness component is removed. In [4] a gradient descent method was used to evolve an initial curve towards the lowest potential of this cost function, and the resulting flow was then implemented using the level set method.

A more elegant approach is to consider the level set variant of the geodesic active contour objective function:



$$ E\left(\phi \right)={\displaystyle {\iint}_{\varOmega}{\delta}_{\alpha}\left(\phi \left(\omega \right)\right) g\left(\left|\nabla {\mathrm{\mathcal{I}}}_{\sigma}\left(\omega \right)\right|\right)\left|\nabla \phi \left(\omega \right)\right| d\omega} $$

(12)
where Γ is now represented in an implicit fashion by the zero level set of ϕ. One can take the derivative of such a cost function with respect to ϕ:



$$ {\phi _{\tau}} = {\delta _{\alpha} }(\phi )\, \mbox{div} \left({g(\cdot)\frac{{\nabla \phi }}{{\left| {\nabla \phi } \right|}}} \right) $$

(13)
where ω and |∇
$$ {\mathcal{I}}_{\sigma} $$
(ω)| were omitted from the notation. Such a flow aims to shrink an initial curve towards strong edges. While the strength of the image gradient is a solid indicator of object boundaries, the initial conditions on the position of the curve can be an issue. Knowing the direction of the propagation in advance is a first drawback (the curve has either to shrink or to expand), while requiring the initial curve to lie either entirely interior or entirely exterior to the objects is a second limitation. Numerous provisions were proposed to address these limitations; some of them aimed to modify the boundary attraction term [29], while most of them introduced global regional terms [45].
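As an illustration of Eqs. (11)-(13), a minimal sketch of the edge-stopping function and of one explicit update step is given below (the particular form g = 1/(1 + |∇I_σ|²), the time step, and the reuse of dirac_reg from the earlier sketch are assumptions, not choices fixed by the text):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_stopping(image, sigma=1.5):
    """A common choice of monotonically decreasing function g for Eqs. (11)-(13):
    g = 1 / (1 + |grad I_sigma|^2), with I_sigma a Gaussian-smoothed image."""
    gy, gx = np.gradient(gaussian_filter(image.astype(float), sigma))
    return 1.0 / (1.0 + gx**2 + gy**2)

def gac_step(phi, g, dt=0.2, a=1.5, eps=1e-12):
    """One explicit step of the flow in Eq. (13):
    phi_tau = delta_a(phi) * div( g * grad(phi) / |grad(phi)| )."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + eps
    vx, vy = g * gx / norm, g * gy / norm
    div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
    return phi + dt * dirac_reg(phi, a) * div
```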


3.2 Region-based Segmentation


In [26] the first attempt to integrate edge-driven and region-based partition components in a level set approach was reported, namely the geodesic active region model. Within such an approach, the assumption of knowing the expected intensity properties (supervised segmentation) of the image classes was considered. Without loss of generality, let us assume an image partition in two classes, and let r in (
$$ \mathcal{I} $$
), r out (
$$ \mathcal{I} $$
) be regional descriptors that measure the fit between an observed intensity 
$$ \mathcal{I} $$
and the class interior to [r in (
$$ \mathcal{I} $$
)] and exterior to [r out (
$$ \mathcal{I} $$
)] the curve. Under such an assumption one can derive a cost function that separates the image domain into two regions:



  • according to a minimal length geodesic curve attracted by the regions boundaries,


  • according to an optimal fit between the observed image and the expected properties of each class,




$$\begin{aligned} E\left(\phi \right)= w{\iint}_{\varOmega}{\delta}_{\alpha}\left(\phi \left(\omega \right)\right)\, g\left(\left|\nabla {\mathcal{I}}_{\sigma}\left(\omega \right)\right|\right)\,\left|\nabla \phi \left(\omega \right)\right| d\omega \\ +{\iint}_{\varOmega}{\mathcal{H}}_{\alpha}\left(-\phi \left(\omega \right)\right)\,{r}_{in}\left(\mathcal{I}\right) d\omega +{\iint}_{\varOmega}\left(1-{\mathcal{H}}_{\alpha}\left(-\phi \left(\omega \right)\right)\right)\,{r}_{out}\left(\mathcal{I}\right) d\omega \end{aligned}$$

(14)
where w is a constant balancing the contributions of the two terms. One can see this framework as an integration of the geodesic active contour model [4] and the region-based growing segmentation approach proposed in [45]. The objective is to recover a minimal length geodesic curve positioned at the object boundaries that creates an image partition that is optimal according to some image descriptors. Taking the partial derivatives with respect to ϕ, one can recover the flow that is to be used towards such an optimal partition:



$$ {\phi _{\tau}} = {\delta _{\alpha} }(\phi )\left({r_{in}}({\mathcal{I}}) - {r_{out}}({\mathcal{I}})\right) + w\,{\delta _{\alpha} }(\phi )\,\mbox{div}\left( {g(\cdot)\frac{{\nabla \phi }}{{\left|{\nabla \phi } \right|}}} \right) $$

(15)
where the term δ α (−ϕ) was replaced with δ α (ϕ) since it has a symmetric behaviour. In [26] such a descriptor function was considered to be the −log of the intensity conditional density [p in (
$$ \mathcal{I} $$
), p out (
$$ \mathcal{I} $$
)] for each class.
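Under such a supervised Gaussian assumption, one explicit step of the flow in Eq. (15) could be sketched as follows (the Gaussian parameters, the weight w, and the reuse of dirac_reg and gac_step from the earlier sketches are illustrative assumptions):

```python
import numpy as np

def gaussian_neg_log(I, mu, sigma):
    """-log of a Gaussian intensity likelihood, used as the regional
    descriptor r(.) in Eqs. (14)-(15) when each class is assumed to follow a
    known Gaussian with (hypothetical) parameters mu, sigma."""
    return 0.5 * ((I - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

def geodesic_active_region_step(phi, image, g, mu_in, s_in, mu_out, s_out,
                                w=0.5, dt=0.2, a=1.5):
    """One explicit step of Eq. (15): the region competition term
    delta_a(phi) (r_in - r_out) plus w times the geodesic active contour term.
    Reuses dirac_reg and gac_step from the earlier sketches."""
    r_in = gaussian_neg_log(image, mu_in, s_in)
    r_out = gaussian_neg_log(image, mu_out, s_out)
    region = dirac_reg(phi, a) * (r_in - r_out)
    # boundary term: the increment of the geodesic active contour step with unit time step
    boundary = gac_step(phi, g, dt=1.0, a=a) - phi
    return phi + dt * (region + w * boundary)
```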