Localisation of Vertebrae on DXA Images Using Constrained Local Models with Random Forest Regression Voting

Given a set of training images and manual annotations of a set of $$n$$ landmarks $$l=1,\ldots ,n$$ on each, a statistical shape model is trained by applying principal component analysis (PCA) to the aligned shapes [2]. This yields a linear model of shape variation, which represents the position of each landmark $$l$$ using



$$\begin{aligned} \mathbf {x}_l = T_{\mathbf {\theta }}(\bar{\mathbf {x}}_l + \mathbf {P}_l\mathbf {b}+ \mathbf {r}_l) \end{aligned}$$   (1)
where $$\bar{\mathbf {x}}_l$$ is the mean position of the point in a suitable reference frame, $$\mathbf {P}_l$$ is a set of modes of variation, $$\mathbf {b}$$ are the shape parameters, $$\mathbf {r}_l$$ allows small deviations from the model, and $$T_\mathbf {\theta }$$ applies a global transformation (e.g. similarity) with parameters $$\mathbf {\theta }$$.
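To make the notation concrete, a minimal NumPy sketch of Eq. 1 follows; the array layout (landmarks as an $$n \times 2$$ array, modes as a $$2n \times t$$ matrix) and the similarity-transform parameterisation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def shape_instance(x_mean, P, b, r, theta):
    """Eq. 1: x_l = T_theta(mean_l + P_l b + r_l), evaluated for all landmarks at once.

    x_mean : (n, 2) mean landmark positions in the reference frame
    P      : (2n, t) shape modes from PCA (x, y interleaved per landmark)
    b      : (t,)   shape parameters
    r      : (n, 2) small per-landmark residuals
    theta  : (scale, angle_rad, tx, ty) similarity-transform parameters
    """
    n = x_mean.shape[0]
    ref = x_mean.reshape(-1) + P @ b            # mean shape + linear combination of modes
    ref = ref.reshape(n, 2) + r                 # allow small deviations from the model
    s, a, tx, ty = theta
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return s * ref @ R.T + np.array([tx, ty])   # apply the global transform T_theta
```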

To match the model to a query image, $$\mathbf {I}$$, the overall quality of fit, $$Q$$, of the model to the image is optimised over the parameters $$\mathbf {p}= \{\mathbf {b},\mathbf {\theta },\mathbf {r}_l\}$$:


$$\begin{aligned} Q(\mathbf {p}) = \sum _{l=1}^{n} C_l ( T_{\mathbf {\theta }}( \bar{\mathbf {x}}_l + \mathbf {P}_l\mathbf {b}+ \mathbf {r}_l ) ) ~~~\text {s.t.}~~~ \mathbf {b}^T\mathbf {S}_b^{-1}\mathbf {b}\le M_t ~~~\text {and}~~~ |\mathbf {r}_l | < r_t \end{aligned}$$   (2)
where $$C_l$$ is a cost image for the fitting of landmark $$l$$, $$\mathbf {S}_b$$ is the covariance matrix of the shape model parameters $$\mathbf {b}$$, $$M_t$$ is a threshold on the Mahalanobis distance, and $$r_t$$ is a threshold on the residuals. $$M_t$$ is chosen using the cumulative distribution function (CDF) of the $$\chi ^2$$ distribution so that 98 % of samples from a multivariate Gaussian of the appropriate dimension would fall within it. This ensures a plausible shape by assuming a flat distribution for the model parameters $$\mathbf {b}$$, constrained within hyper-ellipsoidal bounds [2]. In the original work [2], $$C_l$$ was provided by normalised correlation with a globally constrained patch model.
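A minimal sketch of these two ingredients, assuming SciPy is available: the $$\chi ^2$$ quantile gives $$M_t$$, and clamping $$\mathbf {b}$$ by scaling it toward the origin is a common approximation to projecting onto the limiting hyper-ellipsoid (the function names are illustrative, not the authors').

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_threshold(n_modes, coverage=0.98):
    """M_t such that `coverage` of samples from an n_modes-dimensional Gaussian
    satisfy b^T S_b^{-1} b <= M_t (inverse CDF of the chi-squared distribution)."""
    return chi2.ppf(coverage, df=n_modes)

def constrain_shape_params(b, S_b, M_t):
    """If b violates the constraint, scale it toward the origin onto the limiting
    hyper-ellipsoid (a common approximation to the nearest valid point)."""
    d2 = float(b @ np.linalg.solve(S_b, b))     # squared Mahalanobis distance
    return b if d2 <= M_t else b * np.sqrt(M_t / d2)
```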

RF Regression Voting in the CLM Framework. In RFRV-CLM, $$C_l$$ in Eq. 2 is provided by voting with a Random-Forest (RF) regressor. To train the RF for a single landmark, the shape model is used to estimate the global pose, $$\mathbf {\theta }$$, of the object in each training image by minimising $$|T_{\mathbf {\theta }}(\bar{\mathbf {x}})-\mathbf {x}|^2$$. Each image is resampled into a standardised reference frame by applying the inverse of the estimated pose. The model is scaled so that the width of the reference frame of the mean shape is a given value, $$w_{frame}$$. Sample patches of area $$w_{patch}^2$$ are then generated from the resampled images at a set of random displacements from the true point positions. The displacements $$\mathbf {d}_j$$ are drawn from a flat distribution in the range $$[-d_{max},+d_{max}]$$ in $$x$$ and $$y$$. Finally, image features $$\mathbf {f}_j$$ are extracted from the sample patches. Haar-like features [20] are used, as they have proven effective for a range of applications and can be calculated efficiently from integral images. To allow for inaccurate initial estimates of the pose and to make the detector locally pose-invariant, the process is repeated with random perturbations in the scale and orientation of the pose estimate. An RF [1] is then trained, using a standard, greedy approach, with the feature vectors $$\mathbf {f}_j$$ as inputs and the displacements $$\mathbf {d}_j$$ as regression targets. Each tree is trained on a bootstrap sample of $$N_s$$ pairs $$\{(\mathbf {f}_{j},\mathbf {d}_{j})\}$$ from the training data. At each node, a random subset of $$n_{feat}$$ features is chosen from this sample, and the feature $$f_i$$ and threshold $$t$$ that best split the data into two compact groups are selected by minimising an entropy measure [3]. Splitting terminates at either a maximum depth, $$D_{max}$$, or a minimum number of samples, $$N_{min}$$. The process is repeated to generate a forest of size $$n_{trees}$$.
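The training stage might be sketched as follows. This is a simplified stand-in rather than the authors' implementation: it uses raw patch intensities in place of Haar-like features and scikit-learn's RandomForestRegressor, which splits on variance rather than the entropy measure cited above and does not expose leaf covariances; ref_images, true_pts and the parameter names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sample_training_data(ref_images, true_pts, d_max, w_patch, n_samples, rng):
    """Build (feature, displacement) pairs for one landmark.

    ref_images : list of 2-D arrays already resampled into the reference frame
    true_pts   : (N, 2) true landmark position (x, y) in each reference-frame image
    Assumes landmarks lie at least w_patch/2 pixels from the image border.
    """
    feats, disps = [], []
    h = w_patch // 2
    for img, (px, py) in zip(ref_images, true_pts):
        for _ in range(n_samples):
            d = rng.uniform(-d_max, d_max, size=2)           # displacement from true point
            cx, cy = int(round(px + d[0])), int(round(py + d[1]))
            patch = img[cy - h:cy + h, cx - h:cx + h]        # sample = true point + d
            feats.append(patch.ravel())                      # stand-in for Haar features
            disps.append(d)                                  # regression target
    return np.asarray(feats), np.asarray(disps)

# One forest per landmark, predicting the 2-D displacement of the sample from the point.
rf_l = RandomForestRegressor(n_estimators=10, max_depth=15, bootstrap=True)
# X, y = sample_training_data(...); rf_l.fit(X, y)   with rng = np.random.default_rng()
```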

RFRV-CLM Fitting. Fitting to a query image is initialised via an estimate of the pose of the model, e.g. from a small number of manual point annotations or from a previous model, providing initial estimates of $$\mathbf {b}$$ and $$\mathbf {\theta }$$ (see Sect. 3). Equation 2 is then optimised as follows. The image is resampled into the reference frame using the current pose. Cost images $$C_l$$ are then computed independently for each landmark by evaluating a grid of points in the resampled image over a region of interest around the current estimate of that point; the grid size is defined by a search range $$[-d_{search}, +d_{search}]$$. At each point $$\mathbf {z}_{l}$$ in the grid, the required feature values are extracted and the RF regressor $$R_l$$ is applied. $$R_l$$ then casts a vote into the cost image $$C_l$$ using $$C_l(\mathbf {z}_l+\mathbf {\delta })\rightarrow C_l(\mathbf {z}_l+\mathbf {\delta })+c$$. Each leaf node of the RF stores the mean $$\bar{\mathbf {d}}$$ and covariance $$\mathbf {S}_d$$ of the random displacements $$\mathbf {d}_i$$, in the reference frame, of its training samples from the true point position. This supports several voting styles $$(c, \mathbf {\delta })$$: a single unit vote at $$\bar{\mathbf {d}}$$; probabilistic voting, weighting the vote by $$|\mathbf {S}_d|^{-0.5}$$; or casting a Gaussian spread of votes $$N(\bar{\mathbf {d}}, \mathbf {S}_d)$$.
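Continuing the simplified setup above, a sketch of the voting step with a single unit vote per grid point (the covariance-weighted and Gaussian-spread styles are omitted). Because training above defined $$\mathbf {d}$$ as the offset of the sample from the true point, the vote here lands at $$\mathbf {z}_l - \hat{\mathbf {d}}$$; with the opposite convention the sign would flip.

```python
import numpy as np

def cost_image(ref_image, rf_l, z0, d_search, w_patch):
    """Accumulate RF votes on a (2*d_search+1)^2 grid centred on z0 = (x, y)."""
    size = 2 * d_search + 1
    votes = np.zeros((size, size))
    h = w_patch // 2
    x0, y0 = int(round(z0[0])), int(round(z0[1]))
    for gy in range(-d_search, d_search + 1):
        for gx in range(-d_search, d_search + 1):
            cx, cy = x0 + gx, y0 + gy
            patch = ref_image[cy - h:cy + h, cx - h:cx + h].ravel()
            dx, dy = rf_l.predict(patch[None, :])[0]            # predicted displacement
            vx, vy = int(round(gx - dx)), int(round(gy - dy))   # implied point position
            if abs(vx) <= d_search and abs(vy) <= d_search:
                votes[vy + d_search, vx + d_search] += 1.0      # single unit vote
    return votes   # higher value = stronger evidence for the landmark
```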

The point positions are re-estimated by finding the optimal (highest-voted) point within a disk of radius $$r$$ of the current position in each cost image, applying the shape model and, if the shape constraint in Eq. 2 is violated, moving $$\mathbf {b}$$ to the nearest valid point on the limiting ellipsoid, updating all point positions using $$\mathbf {x}_l \rightarrow T_{\mathbf {\theta }_{r}}(\bar{\mathbf {x}}_l + \mathbf {P}_l\mathbf {b}+ \mathbf {r}_l)$$, and iterating whilst reducing $$r \rightarrow k_r r$$. The initial disk radius $$r_{max}$$ was set to the search range $$d_{search}$$, the search was terminated at $$r_{t}=1.5$$ pixels (in the reference image), and $$k_r$$ was set to $$0.7$$. The optimisation is described in full in Algorithm 1.

[Algorithm 1: the full RFRV-CLM fitting optimisation]
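A sketch of this shrinking-radius search, under the same assumptions as the previous snippets: apply_shape_model stands in for fitting $$\mathbf {b}$$ and $$\mathbf {\theta }$$ to the candidate points and enforcing the Eq. 2 constraints (e.g. with constrain_shape_params above), and the points are handled in cost-image grid coordinates.

```python
import numpy as np

def best_point_in_disk(votes, centre, r):
    """Grid point with the highest accumulated vote within radius r of `centre` (x, y)."""
    ys, xs = np.mgrid[0:votes.shape[0], 0:votes.shape[1]]
    mask = (xs - centre[0]) ** 2 + (ys - centre[1]) ** 2 <= r ** 2
    iy, ix = np.unravel_index(np.argmax(np.where(mask, votes, -np.inf)), votes.shape)
    return np.array([ix, iy], dtype=float)

def shrinking_radius_fit(cost_images, pts, apply_shape_model, r_max, k_r=0.7, r_t=1.5):
    """Outer loop of the fit: pick the best point within a disk of radius r in each
    cost image, regularise all points with the shape model, then shrink r."""
    r = r_max
    while r > r_t:
        pts = np.array([best_point_in_disk(c, p, r) for c, p in zip(cost_images, pts)])
        pts = apply_shape_model(pts)   # fit b, theta; enforce the Eq. 2 constraints
        r *= k_r
    return pts
```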



3 Evaluation


A series of experiments was performed to optimise the various free parameters and options of the RFRV-CLM for application to the task of vertebral localisation in DXA images, and to compare the results to those achieved in [17] using AAMs. To facilitate this comparison, the same dataset and performance metrics were used. The dataset consisted of 320 DXA VFA images scanned on various Hologic (Bedford MA) scanners, obtained from: (a) 44 patients from a previous study [14]; (b) 80 female subjects in an epidemiological study of a UK cohort born in 1946; (c) 196 females attending a local clinic for DXA BMD measurement, and for whom the referring physician had also requested VFA (as approved by the local ethics committee). Manual annotations of 405 landmarks were available for each image, covering the thoracic vertebrae from T7 to T12 and the lumbar vertebrae from L1 to L4. Each of these vertebrae in each image was also classified by an expert radiologist into one of five groups (normal, deformed but not fractured, and grade 1, 2 and 3 fractures according to the Genant definitions [10]; see Fig. 1).

Fig. 1 Example DXA spinal images. (a)-(c) 405-point manual annotations. (d)-(f) Automatic annotation of the L2 vertebra (using the L1-L3 model) with the fully optimised, 2-stage RFRV-CLM. Examples (a, d) show grade 2 fractures on L2 and L3, (b, e) a grade 3 fracture on L1, and (c, f) a grade 3 fracture on L1 and a grade 1 fracture on L2.
