MRI Preprocessing

in the image by computing a weighted average of surrounding pixels, using a robust similarity measure that compares the neighborhoods surrounding the pixels rather than single intensities:




$$ NLM\left({x}_i\right)={\displaystyle \sum}_{\forall {x}_j\in \varOmega }w\left({x}_i,{x}_j\right){x}_j $$

(5.1)
where w(x i , x j ) is a weight assigned to the value x j representing the similarity between the local patches N i and N j of radius r centered on voxels x i and x j , and Ω represents a local search volume around x i :



$$ w\left({x}_i,{x}_j\right)=\frac{1}{Z_i}{e}^{-\frac{{\left\Vert {N}_i-{N}_j\right\Vert}_2^2}{h^2}} $$

(5.2)
where Z i is a normalization constant ensuring that 
$$ {\displaystyle \sum}_{\forall {x}_j\in \varOmega }w\left({x}_i,{x}_j\right)=1 $$
and h acts as a filtering parameter controlling the decay of the exponential function. Figure 5.1 shows an example of typical NLM filter output, demonstrating its excellent noise reduction.



Fig. 5.1
Left: original noisy T1-weighted image. Right: denoised image using the NLM filter
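As a rough illustrative sketch (not the authors' implementation), Eqs. (5.1) and (5.2) can be written directly in NumPy; the patch radius, search radius, and h values below are arbitrary illustrative choices:

```python
import numpy as np

def nlm_denoise(img, patch_radius=1, search_radius=3, h=0.2):
    """Minimal 2D non-local means filter following Eqs. (5.1)-(5.2)."""
    pad = patch_radius
    padded = np.pad(img, pad + search_radius, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            # position of x_i in the padded image
            ci, cj = i + pad + search_radius, j + pad + search_radius
            patch_i = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, values = [], []
            # Omega: local search window around x_i
            for di in range(-search_radius, search_radius + 1):
                for dj in range(-search_radius, search_radius + 1):
                    ni, nj = ci + di, cj + dj
                    patch_j = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.sum((patch_i - patch_j) ** 2)   # ||N_i - N_j||_2^2
                    weights.append(np.exp(-d2 / h ** 2))    # Eq. (5.2), before 1/Z_i
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w / w.sum(), values)         # Z_i normalization, Eq. (5.1)
    return out
```

Because the patch distance across an intensity edge is large, the corresponding weights are nearly zero, so noise is averaged away while edges are preserved.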


Many adaptations and incremental improvements of the NLM filter have been proposed over the last decade. Manjón et al. [21] proposed a multicomponent version of the NLM filter that benefits from the intrinsic multicomponent nature of MR images, in a manner similar to what is done for RGB images in photography. Modern MR sequences use parallel imaging to accelerate the acquisition. As a result, the noise variance is spatially modulated, producing spatially varying noise patterns across the images. In 2010, Manjón et al. proposed a spatially adaptive NLM filter that automatically estimates the local noise level. One key advantage of this filter is that it can be applied as a black box to many different types of MR images, since it can deal with both stationary and nonstationary noise. In fact, this filter has been extensively used as part of the well-known VBM toolbox (http://www.neuro.uni-jena.de/).

More recently, new versions of the NLM method have combined it with prefiltering strategies, which has boosted its accuracy. In Manjón et al. [23], a very efficient DCT-based prefilter was applied before a rotationally invariant version of the NLM method. Finally, in 2015, it was proposed to replace the DCT-based prefilter with a PCA-based prefilter, which significantly improved denoising performance and represents the current state of the art in MR image denoising [32]. This filter, named PRI-NLPCA, can also deal with spatially varying noise patterns.

Some denoising methods are devoted to other types of MR images, such as those used in diffusion-weighted imaging (DWI). For example, Tristán-Vega and Aja-Fernández [24] proposed a method that uses orientation information to filter each DW image through its correlations with images of similar orientations. Sparse reconstruction-based methods have also been proposed to reduce the noise in DW images, using dictionary-based approaches [16] or PCA-based decomposition [15, 33].



5.3 Inhomogeneity Correction


MR images are normally affected by signal intensity inhomogeneity, which is mainly produced by imperfections in the radio-frequency coils and by object-dependent interactions (Sled et al. 1998). This artifact is perceived as a low-frequency variation of the signal intensity across the image.

Many quantitative methods, such as image registration and segmentation, rely on the assumption that a given tissue is represented by similar voxel intensities throughout the data. Therefore, correction of inhomogeneous data must be performed prior to any quantitative MR analysis. There is an extensive literature dealing with this so-called bias field correction [34].

The common MR imaging signal intensity model including the inhomogeneity effect is a multiplicative model with additive noise:



$$ Y=x\beta +n $$

(5.3)
where Y is the observed voxel intensity, 
$$ \beta $$
is the corresponding value of the bias field, assumed to be smooth, x is the true emitted intensity, and n is Rician distributed additive noise [35–38].
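A toy one-dimensional simulation of Eq. (5.3) illustrates the effect; the two-tissue signal, the sinusoidal field, and the Gaussian noise (a stand-in for Rician noise, reasonable at high SNR) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.where(np.arange(100) < 50, 100.0, 200.0)        # true intensities: two "tissues"
beta = 1.0 + 0.3 * np.sin(np.linspace(0, np.pi, 100))  # smooth multiplicative bias field
n = rng.normal(0.0, 1.0, 100)                          # Gaussian stand-in for Rician noise
Y = x * beta + n                                       # observed signal, Eq. (5.3)
corrected = Y / beta                                   # ideal correction with the true field
```

Within a single tissue, the bias spreads the observed intensities over a wide range, while division by the (here known) field restores a tight cluster; real methods must of course estimate the field from Y alone.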

There are two main approaches to inhomogeneity correction: prospective and retrospective strategies. Prospective methods try to avoid this type of artifact during the acquisition process by using special hardware or specific sequences such as the 3D magnetization-prepared rapid acquisition gradient echo (MP2RAGE) sequence. However, these techniques require additional hardware or longer acquisition times, which limits their usefulness. Retrospective methods have been used more intensively since they do not require any special acquisition protocol and can be applied offline as a preprocessing step in any analysis pipeline.

There are two main classes of retrospective bias correction methods: those that model the bias field during the segmentation process and those that work directly with image features. Segmentation-based methods estimate the bias field as an image model parameter during segmentation [38–42]. This parameter estimation is usually performed using an expectation-maximization (EM) algorithm [43].

In the SPM2 (statistical parametric mapping, Wellcome Institute, London, United Kingdom) software, Ashburner [44] modeled the bias field by using a combination of discrete cosine transform basis functions whose parameters were adjusted by minimizing the negative log-likelihood of the log-transformed data (which is equivalent to the image entropy). This technique was further improved in subsequent versions of the software (the current one is SPM12). Although it obtains remarkable results, it is limited to brain imaging.

On the other hand, there are methods that operate directly on image properties and make minimal assumptions about the image characteristics, such as the number of tissues or their location, which makes them more general. Some of these methods estimate the bias field using a set of low-frequency basis functions and the minimization of a cost function related to the homogeneity of the corrected image [45–48].
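A much-simplified sketch of this family of methods, assuming a single tissue class so that an ordinary least-squares fit of a low-order polynomial to the log-intensities can stand in for the homogeneity cost minimization used in practice:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(-1.0, 1.0, 200)                  # spatial coordinate
beta = np.exp(0.4 * t + 0.2 * t ** 2)            # smooth multiplicative field
Y = 150.0 * beta * (1.0 + 0.01 * rng.standard_normal(200))  # biased single-tissue signal

A = np.vander(t, 4)                              # low-frequency (cubic polynomial) basis
coef, *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)  # fit the log-bias
fit = A @ coef
bias_est = np.exp(fit - fit.mean())              # zero-mean log-field estimate
corrected = Y / bias_est                         # flattened image
```

With several tissues present, a plain fit would absorb anatomy into the field estimate, which is why real methods minimize a homogeneity cost (e.g., entropy) or embed the estimate in a segmentation model instead.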



Fig. 5.2
Example of the inhomogeneity correction process. From left to right: original bias-field-corrupted image, estimated bias field, inhomogeneity-corrected image, and histograms of the original and corrected images. Note that the corrected image has a sharper histogram with well-clustered intensities

However, probably the most used and referenced method is the well-known N3 method (Sled et al. 1998), which, despite its simplicity, has gained huge popularity, probably thanks to its robustness. This method estimates the bias field by sharpening the image histogram using a Gaussian deconvolution and smoothing the resulting bias field estimate using B-spline fitting. More recently, an incremental improvement of the N3 method, called N4, has been proposed that automatically sets some of the method's parameters using a hierarchical optimization scheme [49] (Fig. 5.2).


5.4 Superresolution


In MR imaging, images are acquired at a resolution that is limited by several factors, including the signal-to-noise ratio (SNR), dynamic considerations, hardware, time limitations, and patient comfort, often resulting in a sampling density that is insufficient for certain applications.

In typical clinical settings, several types of images are obtained with different voxel resolutions. Traditionally, the in-plane resolution is higher than the resolution in the slice direction, yielding non-isotropic voxel sizes. In multimodal applications, such as image segmentation or registration, low-resolution (LR) data have to be upsampled to a specific voxel size to make them compatible with higher-resolution (HR) image data [50, 51]. In such cases, interpolation techniques [52, 53] have traditionally been applied. Techniques such as linear interpolation or spline-based methods have been extensively used to increase the apparent data resolution. However, such techniques estimate new points assuming that the existing ones (in the LR image) keep the same value in the HR image, which is only valid within homogeneous regions. As a result, interpolated images are typically blurred versions of the underlying HR images.
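A one-dimensional toy example makes the blurring explicit: upsampling a sharp edge by linear interpolation introduces intermediate values that the true HR signal never contained:

```python
import numpy as np

lr = np.array([0.0, 0.0, 1.0, 1.0])   # LR samples across a sharp edge
xi = np.linspace(0.0, 3.0, 7)         # 2x upsampling grid
hr = np.interp(xi, np.arange(4), lr)  # linear interpolation
# hr = [0, 0, 0, 0.5, 1, 1, 1]: the new midpoint sample is 0.5,
# so the edge is smoothed instead of recovered
```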

A better approach to effectively increase the resolution of an LR dataset is to use superresolution techniques [54]. Superresolution refers to the process of inferring an HR image from one or several LR images.

Specifically, in MR, image voxels in LR data y can be related to the corresponding underlying HR voxels x through a simple degradation model:



$$ y=DHx+n $$

(5.4)
where D is a decimation operator (defined as taking every Lth value starting from zero in each dimension), H is the convolution matrix, x is the underlying HR data, and n is Rician distributed random noise [55]. In MRI, H can be roughly approximated by a 3D boxcar or Gaussian function representing the point spread function (PSF) of the acquisition system.

Therefore, the value y j of any voxel in the LR image can be expressed as follows:



$$ {y}_j=\frac{1}{N}{\displaystyle \sum}_{i=1}^N{x}_i+n $$

(5.5)
where the value of the LR voxel y j is the average of the N corresponding x i voxels in the subjacent HR image (assuming equal weights for all HR voxels) plus some noise from the measurement process.
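For a 1-D toy signal and a downsampling factor L = 2, the degradation model of Eqs. (5.4) and (5.5) reduces to simple block averaging (boxcar PSF followed by decimation), ignoring the noise term:

```python
import numpy as np

L = 2                                    # downsampling factor
x = np.arange(8, dtype=float)            # toy HR signal
blurred = np.convolve(x, np.ones(L) / L, mode="full")  # H: boxcar PSF of width L
y = blurred[L - 1::L]                    # D: keep every Lth blurred sample
# Eq. (5.5) with n = 0: each LR voxel is the mean of its L HR voxels
assert np.allclose(y, x.reshape(-1, L).mean(axis=1))
```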

Given this MR image formation model, the aim of any superresolution method is to find the HR x i values from the LR y j values. This is a very ill-posed problem since there are infinitely many sets of x i values that meet this condition. A common approach is to minimize a merit function such as:



$$ \overset{\wedge }{x}=\underset{x}{ \arg \min }{\left\Vert y-DHx\right\Vert}^2 $$

(5.6)
Due to the nonuniqueness of the solution for this problem, extra information is needed to constrain the possible solutions to obtain plausible results. One commonly used approach is to apply smoothness constraints in the reconstruction process that are based on the assumption of smoothness of the reconstructed data:



$$ \overset{\wedge }{x}=\underset{x}{ \arg \min}\left({\left\Vert y-DHx\right\Vert}^2+\lambda R(x)\right) $$

(5.7)
where R(x) is a regularization term and 
$$ \lambda $$
is a weight that balances the contributions of the smoothness and data fidelity terms. However, such a smoothness assumption penalizes the high-frequency content of the reconstructed image, which is precisely what we want to recover. Current superresolution methods therefore use R(x) terms enforcing regularity rather than smoothness.
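A minimal 1-D sketch of Eq. (5.7) can be solved by plain gradient descent; the boxcar-plus-decimation forward model, the Tikhonov smoothness regularizer R(x) = ‖∇x‖², and all parameter values below are illustrative choices, not a published method:

```python
import numpy as np

L = 2  # upsampling factor

def DH(x):
    """Forward model: boxcar blur of width L followed by decimation (block means)."""
    return x.reshape(-1, L).mean(axis=1)

def DH_T(y):
    """Adjoint of DH: spread each LR value back over its L HR voxels."""
    return np.repeat(y, L) / L

def superresolve(y, lam=0.1, steps=500, lr=0.5):
    """Minimize ||DHx - y||^2 + lam * ||diff(x)||^2 by gradient descent, Eq. (5.7)."""
    x = np.repeat(y, L)                      # nearest-neighbour initialization
    for _ in range(steps):
        data_grad = 2.0 * DH_T(DH(x) - y)    # gradient of the data fidelity term
        d = np.diff(x)                       # finite differences for R(x)
        reg_grad = np.zeros_like(x)
        reg_grad[:-1] -= 2.0 * lam * d       # gradient of lam * ||diff(x)||^2
        reg_grad[1:] += 2.0 * lam * d
        x = x - lr * (data_grad + reg_grad)
    return x
```

For a smooth signal the reconstruction both honors the LR block means and stays close to the underlying HR values; sharper signals expose the bias of the smoothness prior that the text describes.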

Superresolution techniques have previously been applied to increase image resolution in functional MR (fMR) imaging [56] and diffusion tensor imaging (DTI) studies [57]. However, most such techniques are based on the acquisition of multiple LR images (typically orthogonal) with small shifts, a process that is time consuming and therefore not adequate for typical clinical settings. In contrast, single-image superresolution techniques do not increase acquisition time and can be applied to any dataset at the preprocessing stage of any image analysis pipeline.

An example of a single-image superresolution technique was proposed by Manjón et al. [22], where nonlocal pattern redundancy and inter-scale constraints were used to restrict the solution space of this otherwise very ill-posed problem. In this method, a nonlocal means filter enforces the regularity of the image, while a mean constraint ensures inter-scale image fidelity. Another interesting approach, also proposed by Manjón et al. [28], benefits from the fact that in clinical settings both LR and HR images of the patient are acquired in the same session. As a result, the acquired HR images can be used to reconstruct the LR images from the same patient.

A similar approach was used in diffusion-weighted imaging to increase the resolution of the DW images [58]. In this case, the B0 image was upsampled using the nonlocal upsampling method [22], and this upsampled version (which has a higher SNR) was used to increase the resolution of the different gradient images. In Fig. 5.3, an example of the results obtained with this technique is shown.



Fig. 5.3
Left: gold standard 1.2 mm3 resolution color-coded map. Center: result using CLASR to reconstruct at 0.6 mm3 (upsampling factor 2). Right: result using CLASR to reconstruct at 0.4 mm3 (upsampling factor 3)


5.5 Registration


Image registration is the process of mapping different images into the same coordinate system so equivalent points of the different images share the same location in a common geometric space. Registration is necessary in order to be able to compare or integrate data from multiple sources (multimodal imaging) or to apply some analysis pipeline (e.g., segmentation).

The registration process first estimates the transformation parameters needed to map the different images onto each other. The estimated transformation is then applied to the moving image(s) to locate them in the reference image space. Depending on the complexity of the transformation, registration can be linear or nonlinear. In linear registration, the same transformation is applied to every voxel of the moving image(s) (specified by an affine transformation matrix encoding translations, rotations, scaling, and shears), while in nonlinear registration, different voxels may undergo different transformations (specified by so-called deformation fields).

The geometric transformation mapping the source space to the target space is usually estimated through an optimization process. Such a process requires a similarity measure to evaluate the goodness of the current transformation. Similarity measures (Fig. 5.4) commonly used in MR image registration are the mean-squared differences (MSD), the correlation coefficient (CC), and the normalized mutual information (NMI). To reduce the computational burden of this complex estimation problem, gradient descent techniques are normally used. Recently, the use of graphics processing units (GPUs) has also considerably reduced registration time through massively parallel approaches [59].
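The three measures can be sketched in NumPy; the `nmi` helper below uses a simple joint-histogram estimate with an arbitrary bin count (real registration packages use more careful estimators):

```python
import numpy as np

def msd(a, b):
    """Mean-squared differences: 0 for identical images, larger = less similar."""
    return np.mean((a - b) ** 2)

def cc(a, b):
    """Correlation coefficient: 1 for a perfect linear intensity relationship."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def nmi(a, b, bins=32):
    """Normalized mutual information (H(A)+H(B))/H(A,B): higher = more similar."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))  # Shannon entropy
    return (h(px) + h(py)) / h(p)
```

MSD assumes identical intensity scales (monomodal registration), CC tolerates a linear intensity rescaling, and NMI only assumes a statistical relationship between intensities, which is why it is the usual choice for multimodal registration.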
Oct 18, 2017 | Posted in GENERAL RADIOLOGY
