Informatics

The knowledge base used to describe the region of interest D consists of four elements, namely a probabilistic atlas ($$ \mathcal{P} $$), texture-based features ($$ \mathcal{T} $$), relationships ($$ \mathcal{R} $$), and specific segmentation methods ($$ \mathcal{M} $$) for the organs in the region D: $$ \mathcal{K}(D) = \{\mathcal{P},\,\mathcal{T},\,\mathcal{R},\,\mathcal{M}\} $$. We construct a probabilistic atlas $$ \mathcal{P} $$ for our data D, which is a map that assigns to each voxel a set of probabilities of belonging to each of the organs, $$ \mathcal{P}(D) = (p_0, p_1, p_2, \ldots, p_N) $$, where $$ p_i $$ is the probability that the voxel belongs to organ i, (1 ≤ i ≤ N). All organs other than those that are manually delineated are assigned the probability $$ p_0 = 1 - \sum_{k=1}^{N} p_k $$. The probabilistic atlas is constructed in the training phase of the framework. During deployment, we initialize the organs present in the atlas using the registration techniques described in [15]. This provides insight into the spatial layout of the different organs and their variation with respect to one another, and also guides the automated segmentation procedures.
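As a concrete illustration of the atlas bookkeeping above, the following minimal sketch (assuming a NumPy volume of per-voxel organ probabilities; the array shape and variable names are ours, not part of the original framework) derives the background probability p0 = 1 − Σ pk and assembles P(D) at a voxel.

```python
import numpy as np

# Hypothetical atlas: probabilities for N organs at every voxel of a
# (Z, Y, X) volume, stored as shape (Z, Y, X, N) with values in [0, 1].
N = 4
atlas = np.random.dirichlet(np.ones(N + 1), size=(8, 64, 64))[..., :N]  # toy data

# p_0: probability of belonging to none of the delineated organs.
p0 = np.clip(1.0 - atlas.sum(axis=-1), 0.0, 1.0)

# P(D) at a chosen voxel = (p_0, p_1, ..., p_N); the components sum to 1.
z, y, x = 4, 32, 32
p_voxel = np.concatenate(([p0[z, y, x]], atlas[z, y, x]))
print(p_voxel, p_voxel.sum())
```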


Every organ presents different texture-based features depending on the image modality and the tissue type of the organ. We denote the set of texture-based features by $$ \mathcal{T} = \{T_1, T_2, \ldots, T_N\} $$, where $$ T_i = [f_1^i, f_2^i, \ldots, f_k^i] $$ is the set of texture-based features used to discriminate organ i, (1 ≤ i ≤ N), and k is the number of features for organ i. The optimal texture-based features are selected in the training phase of our framework. The image analysis methods use these texture-based features to segment the organs and to refine the initialization provided by the probabilistic atlas.
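One simple way to hold the per-organ feature sets T_i at deployment time is a mapping from each organ to its selected feature names; this is only a sketch, and the feature names below are illustrative placeholders rather than the features selected in the original training phase.

```python
# Hypothetical feature sets T_i chosen during training (names are placeholders).
texture_features = {
    "liver":  ["mean_intensity", "glcm_contrast", "glcm_homogeneity"],
    "spleen": ["mean_intensity", "gabor_energy"],
    "kidney": ["glcm_entropy", "local_variance", "gabor_energy"],
}

def feature_vector(voxel_features: dict, organ: str) -> list:
    """Assemble [f_1^i, ..., f_k^i] for the given organ from precomputed features."""
    return [voxel_features[name] for name in texture_features[organ]]
```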

Relationships are a set of rules that describe how two organs in the region relate to one another. Based on prior anatomical knowledge about the organs, we consider two types of relationships: hierarchical (e.g., child-parent) and spatial (e.g., posterior-anterior). We denote the set of relationships by $$ \mathcal{R} = \{R_{r,t} \mid r, t\ \text{organs}\} $$, where $$ R_{r,t} \in \{\text{posterior},\ \text{anterior},\ \text{right},\ \text{left},\ \text{child},\ \text{parent}\} $$, r is the reference organ, and t is the target organ. For example, $$ R_{i,j} = \text{child} $$ means that the reference organ i (e.g., ventricle) is a child of the target organ j (e.g., heart), while $$ R_{i,j} = \text{right} $$ implies that the reference organ i (e.g., right lung) is to the right of the target organ j (e.g., heart).
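A relationship set of this form maps naturally onto a dictionary keyed by (reference, target) organ pairs; the sketch below reuses the ventricle/heart and right lung/heart examples from the text, while the data structure itself is our assumption.

```python
# Hypothetical encoding of R = {R_{r,t}}: (reference organ, target organ) -> relation.
RELATIONS = {"posterior", "anterior", "right", "left", "child", "parent"}

relationships = {
    ("ventricle", "heart"): "child",   # the ventricle is a child of the heart
    ("right lung", "heart"): "right",  # the right lung is to the right of the heart
}

def relation(reference: str, target: str):
    """Return R_{r,t} for a reference/target pair, or None if no rule is defined."""
    rel = relationships.get((reference, target))
    assert rel is None or rel in RELATIONS
    return rel
```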

Image analysis methods are specific methods set during the training phase for a specific organ. We denote the set of all methods by $$ \mathcal{M} = \{M_1, M_2, \ldots, M_K\} $$. Because of the high anatomic variation and the large amount of structural information in medical images, segmentation methods based on global information alone yield inadequate results for region extraction. We therefore use the knowledge-based atlas to guide the automatic segmentation process and then apply organ-specific image analysis methods to segment the anatomical structures. We evaluate the performance of these methods in the training phase and select those with higher accuracy and true positive rate as well as lower false positive rate. One of the segmentation methods used is the multi-class, multi-feature fuzzy connectedness method, described in the following section.
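Read literally, the selection step amounts to picking, for each organ, the candidate method with the best training-phase statistics; the scoring rule below (accuracy + TPR − FPR) and the numbers are our own illustrative assumptions, not values from the original work.

```python
# Hypothetical training-phase statistics per (organ, method): (accuracy, TPR, FPR).
training_stats = {
    "liver":  {"fuzzy_connectedness": (0.94, 0.92, 0.05),
               "region_growing":      (0.88, 0.85, 0.09)},
    "spleen": {"fuzzy_connectedness": (0.91, 0.89, 0.07),
               "level_set":           (0.93, 0.90, 0.06)},
}

def select_method(organ: str) -> str:
    """Pick the method scoring best on accuracy/TPR while penalizing FPR."""
    stats = training_stats[organ]
    return max(stats, key=lambda m: stats[m][0] + stats[m][1] - stats[m][2])
```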

Multi-class, multi-feature fuzzy connectedness: The anatomical objects in medical data are characterized by certain intensity level and intensity homogeneity features. Also, additional features can be computed to characterize certain properties of a tissue class. Such features can be used to distinguish between different types of tissue classes. Our multi-class, multi-feature fuzzy connectedness method is able to take advantage of multiple features to distinguish between multiple classes for segmentation and classification.

We define three kinds of fuzzy affinities: local fuzzy spel affinity, global object affinity, and global class affinity. The local fuzzy spel affinity (μκ) consists of three components: (1) the object feature intensity component (μϕ), (2) the intensity homogeneity component (μψ), and (3) the texture feature component (μφ). The similarity of the pixels' feature vectors is computed using the Mahalanobis metric:
$$ m_{d(c \to d)}^{2} = \left(\mathrm{X}_{(c \to d)} - \overline{\mathrm{X}}_{(c \to d)}\right)^{T} \mathrm{S}_{(c \to d)}^{-1} \left(\mathrm{X}_{(c \to d)} - \overline{\mathrm{X}}_{(c \to d)}\right), $$
where $$ \mathrm{X}_{(c \to d)},\ \overline{\mathrm{X}}_{(c \to d)},\ \mathrm{S}_{(c \to d)} $$ are the feature vector, the mean feature vector, and the covariance matrix in the direction from c to d, respectively. The bias in intensity in a specific direction is accounted for by allowing different levels and signs of intensity homogeneity in different directions of adjacency [13]. Thus, this formulation accounts for different levels of increase or decrease in intensity values in the horizontal (left, right) or vertical (up, down) directions. The advantage of using the Mahalanobis metric is that it weighs the differences in the various feature dimensions by the range of variability along each dimension. These distances are expressed in units of standard deviation from the mean, which allows us to assign a statistical probability to the measurement. The local fuzzy spel affinity is computed as
$$ \mu_{\kappa}(c, d) = \frac{1}{1 + m_{d(c \to d)}} $$
in order to ensure that μκ(c, d) ∈ [0, 1] and that the relation is reflexive and symmetric, where c and d are pixels of Z², the set of all pixels of a two-dimensional Euclidean space.
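The two formulas above translate directly into code; the sketch below assumes the direction-specific mean vector and covariance matrix have already been estimated (e.g., during training), and the variable names are ours.

```python
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """m_d(c->d): distance of the directional feature vector x from the
    direction-specific mean, in units of standard deviation."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def local_affinity(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """mu_kappa(c, d) = 1 / (1 + m_d(c->d)), which lies in (0, 1]."""
    return 1.0 / (1.0 + mahalanobis(x, mean, cov))

# Toy example with two directional features (intensity, intensity difference).
mean = np.array([100.0, 5.0])
cov = np.array([[25.0, 0.0], [0.0, 4.0]])
print(local_affinity(np.array([104.0, 6.0]), mean, cov))
```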

Fuzzy connectedness captures the global hanging-togetherness of pixels by using the local affinity relation and by considering all possible paths between two, not necessarily nearby, pixels in the image. It considers the strengths of all possible paths between the two given pixels, where the strength of a particular path is the weakest affinity between successive pairs of pixels along the path. Thus, the strongest connecting path between the two given pixels specifies their degree of global hanging-togetherness. The global object affinity is the largest of the weakest affinities between successive pairs of pixels along a path $$ p_{cd} $$ over all possible paths $$ P_{cd} $$ from c to d, and is given by $$ \mu_{K}(c,d) = \underset{p_{cd} \in P_{cd}}{\max}\left\{\underset{1 \le i \le m}{\min}\left[\mu_{\kappa}\left(c^{(i)}, c^{(i+1)}\right)\right]\right\} $$.
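In practice the max-min path strength is not evaluated by enumerating paths; a common strategy in the fuzzy connectedness literature is a Dijkstra-style propagation from a seed pixel. The sketch below assumes 4-adjacency on a 2-D grid and a caller-supplied `affinity(c, d)` function; it illustrates the idea rather than reproducing the authors' implementation.

```python
import heapq
import numpy as np

def fuzzy_connectedness(affinity, shape, seed):
    """Connectivity map mu_K(seed, d) for every pixel d of a 2-D grid.

    `affinity(c, d)` must return mu_kappa(c, d) in [0, 1] for 4-adjacent pixels.
    Each pixel keeps the strongest max-min path strength found so far.
    """
    conn = np.zeros(shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]  # max-heap via negated strengths
    while heap:
        neg_strength, c = heapq.heappop(heap)
        strength = -neg_strength
        if strength < conn[c]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            d = (c[0] + dy, c[1] + dx)
            if 0 <= d[0] < shape[0] and 0 <= d[1] < shape[1]:
                s = min(strength, affinity(c, d))  # weakest link along this path
                if s > conn[d]:
                    conn[d] = s
                    heapq.heappush(heap, (-s, d))
    return conn
```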

In our framework, the global object affinity and the local pixel affinity are assigned only if the global class affinity (or discrepancy measure) of c and d belonging to the neighboring objects' classes is more (or less) than a predefined value γ (note that the affinity value has an inverse relationship with the Mahalanobis distance metric in our formulation). The minimum discrepancy measure $$ J(c,d) = \underset{1 \le i \le b}{\min}\ m_{d}^{i}(c, d) $$, where $$ m_{d}^{i}(c,d) $$ is the Mahalanobis distance of the pixel pair under class i and b is the number of neighboring classes of the target object, gives the maximum membership value of a pixel pair belonging to a certain class. If J(c,d) < γ and the class to which the pixel pair belongs is not the target object class, then the local pixel affinity μκ(c,d) is set to zero; otherwise, the local pixel affinity is computed as described earlier.
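The gating rule can be folded directly into the local affinity computation; in the sketch below, `class_distances` (Mahalanobis distances of the pixel pair under each neighboring class), `gamma`, and the function name are assumed for illustration.

```python
def gated_affinity(m_target: float, class_distances: dict, target: str, gamma: float) -> float:
    """Local affinity gated by the minimum discrepancy measure J(c, d).

    m_target:        Mahalanobis distance of the pixel pair under the target class.
    class_distances: class name -> Mahalanobis distance of the pair under that class.
    """
    best_class = min(class_distances, key=class_distances.get)
    J = class_distances[best_class]
    if J < gamma and best_class != target:
        return 0.0  # the pair most likely belongs to a competing class
    return 1.0 / (1.0 + m_target)
```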
