© The Author(s) 2014
Yves Sucaet and Wim Waelput, Digital Pathology, SpringerBriefs in Computer Science, DOI 10.1007/978-3-319-08780-1_4


4. Image Analysis



Yves Sucaet and Wim Waelput

(1) Pathomation, Berchem, Belgium

Yves Sucaet (Corresponding author)
Wim Waelput

Abstract

In the field of digital pathology, image analysis refers to the computer-aided diagnostic assessment of whole slide images (WSIs). While image analysis is clearly another application of WSI, we feel that the subject has become vast enough to warrant its own chapter. The potential of digital pathology has taken another giant step with the emergence of computer-assisted WSI analysis. To overcome challenges related to optimizing speed and accuracy, numerous statistical manipulations and algorithms have been generated, adapted, and adopted to enhance the detection, quantification, and characterization of pathology. In this chapter, both the history and current state of digital pathology and WSI analysis are reviewed, as well as the challenges that remain in optimizing their use. It is clear that the potential of digital pathology is almost boundless, but that much work remains to be done.


Keywords
Digital pathology · Image analysis · Histological analysis · Object recognition



4.1 Current Technology and Challenges


The traditional histological/pathological model, which remains the gold standard, consists of a single operator, typically a pathologist, visually examining a slide for specific tissue characteristics, such as atypical cells or nuclei, the presence of inflammatory cells, cell invasion across tissue barriers, or evidence of tissue necrosis. This process is aided by the addition of various stains but is limited by a lack of objectivity and reproducibility, the small number of characteristics the human eye can detect within a reasonable period of time, the considerable variability that exists between operators [1], and the examiner's inability to assess more than the small sampled sections of a slide that fall within single visual fields. Moreover, it is extremely inefficient from a time-management perspective. Not only are human eyes limited in the number of characteristics they can seek in a given visual field at any given time, they also must often sift through volumes of normal tissue to identify any area of pathology. For example, of all the prostate biopsies performed in the USA to detect malignancy, only 20 % reveal any clinically relevant pathology [2], and this low percentage persists even in selected patients with abnormal digital prostate examinations and increased serum markers such as prostate-specific antigen [3]. Similarly, of the approximately one million breast biopsies performed each year in the USA, only 20–30 % demonstrate any evidence of malignancy [4]. These numbers mean that, even in highly selected patients, pathologists spend the vast majority of their time assessing normal tissue.

In an attempt to use digital whole slide images (WSIs) more effectively and efficiently, there has been, over time, a strong push toward computer-assisted diagnosis and computer image analysis, conjoined concepts that hearken back to the initial use of digital mammography in the early 1990s [5], a practice that has itself evolved into widespread clinical use for the detection of breast cancer across the USA [6]. In particular, over the past decade, the use of computers to assist in clinical diagnosis has blossomed into the evaluation of WSI, where it increasingly allows for the objective, rapid, and reproducible evaluation of numerous cellular and extracellular characteristics. Today, in excess of one hundred different cellular and extracellular characteristics can be assessed almost simultaneously through parallelization. The small individual units are generally referred to as superpixels or sometimes image objects: in effect, polygonal regions of a digital image, larger than a single pixel, whose constituent pixels share similar color and brightness (a brief sketch of superpixel generation follows the list below). Moreover, WSI analysis allows all of these characteristics within a given superpixel to be compared against those of neighboring superpixels. To achieve this, a complex series of steps must be undertaken to convert visual data to digital and then to statistically interpretable numerical data, many of them using mathematical models and algorithms that allow for specific quantification and analyses. One component of this process is that stains can be both detected and quantified [7, 8]. This is accomplished using recent innovations such as



  • automated histopathology pattern recognition [9]


  • color enhancement and standardization techniques [10–12]


  • color content analysis that allows for the detection and quantification of histochemical stains [13]


  • image microarrays (IMA) and multiplexed biomarker testing

All of this is done so that several tissue characteristics, biomarkers, or stains can be sought and detected on the same slide, thereby replacing the tedious-to-make and difficult-to-maintain cell blocks of traditional microscopy [14, 15].
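To make the superpixel concept introduced above concrete, the sketch below groups pixels into superpixels with the SLIC algorithm as implemented in scikit-image. SLIC is one common choice rather than a method cited in this chapter, and the function name and parameter values are illustrative assumptions.

    # Minimal superpixel sketch using SLIC (scikit-image); parameters are illustrative.
    from skimage.segmentation import slic

    def compute_superpixels(rgb_tile, n_segments=500):
        """Group pixels of similar color and position into labeled superpixels."""
        # Each label in the returned array marks one polygonal region whose
        # pixels share similar color and brightness.
        return slic(rgb_tile, n_segments=n_segments, compactness=10,
                    start_label=1, channel_axis=-1)

Per-superpixel statistics (mean stain intensity, texture, and so on) can then be computed and compared against those of neighboring superpixels, as described above.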

Among numerous other uses, these algorithms allow for the (semi-)automatic detection and characterization of cancerous cells [16], potentially replacing the fastidious manual searches of pathologists that, as stated earlier, may detect nothing but normal tissue up to 80 % of the time. They also are being utilized to characterize cancers, guide their treatment, and predict prognosis. For example, automated image analysis of routine histological sections is now being used to detect and quantify the expression of human epidermal growth factor receptor (HER2) in breast cancer, since over-expression is associated with an increased risk of recurrence and poor outcomes and predicts responsiveness to trastuzumab, a monoclonal antibody that targets the HER2/neu receptor [17]. Similarly, immunohistochemistry techniques have been combined with digital pattern recognition-based image analysis to identify a specific phenotype of colorectal cancer [18]. This is but scratching the surface, however. Given the tremendous number of diseases and the even greater number of histological markers of disease involving nuclei, cytoplasm, and cell membranes (for example, the detection of breast cancer cells protein receptors such as ER, PR, HER2, Ki-67, and P53), as well as extracellular substrate, there continues to be a huge call for improved informatics tools to ease the massive-scale visualization and analysis of data.

Nowadays, multipurpose tools and programming libraries such as OpenSlide [19, 20] and Bio-Formats [21] aim to assist users and developers with reading an increasing number of WSI formats. Once the WSI is opened and accessible, another major challenge is to optimize the speed of the image analysis algorithms run on each and every region of interest in the WSI. Speed is critical because of the tremendous volume of slides generated in current clinical practice. For example, Isaacs et al. [22] reported that their facility processed roughly 15,000 slides daily. If the average per-image processing time of 60–90 s reported at Massachusetts General Hospital [23] reflects processing elsewhere, a facility averaging 15,000 images per day would accumulate roughly 900,000–1,350,000 s of computation daily; with only 86,400 s in a day, that works out to 10–15 images being processed in parallel, continuously, around the clock, just to keep up with demand.
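As a concrete illustration of such libraries, the sketch below opens a WSI and reads a region with the OpenSlide Python bindings (openslide-python); the file path, region coordinates, and tile size are hypothetical.

    # Hedged sketch: reading one region of a WSI with openslide-python.
    import openslide

    slide = openslide.OpenSlide("example_slide.svs")   # hypothetical file path
    print(slide.dimensions)    # (width, height) of the full image at level 0
    print(slide.level_count)   # number of pyramid levels available

    # Read a 1024 x 1024 tile from the highest-resolution level (level 0);
    # OpenSlide returns an RGBA PIL image, so the alpha channel is dropped.
    tile = slide.read_region((0, 0), 0, (1024, 1024)).convert("RGB")
    slide.close()

Such per-tile reads are what make it practical to run analysis algorithms region by region rather than loading an entire multi-gigapixel image into memory.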

As described in the previous section, concerns regarding image processing speed have led to a proliferation of WSI management systems designed to enhance the capture, storage, retrieval, and dissemination of virtual slides and specimens [24–27]. Moreover, cloud-based platforms such as Histobox (https://www.histobox.com) and Simagis (http://www.simagis.com) are increasingly being used to perform high-demand whole slide imaging tasks on servers other than the computers that pathologists use. However, markedly enhancing the speed of WSI diagnostics via computer-assisted analysis remains central to any attempt to reduce processing times significantly further. Moreover, accuracy must be maintained, if not augmented. To achieve both ends, speed of analysis and accuracy of results, numerous image algorithms are already being applied to WSI, some of which start even before image capture is complete.

As the name implies, image preprocessing is one of the earliest steps and, in fact, consists of several steps. One essential component is the normalization of color and degree of illumination. This step is required because histopathological assessment invariably relies on the application of various dyes and immunofluorescent stains and counterstains to identify specific cellular and extracellular components and markers of disease, and this application is never entirely homogeneous. Similarly, image scanning is typically non-uniform. To reduce image-to-image variation and the blurring of clear thresholds, these discrepancies can be minimized in a number of ways that include using calibration targets, estimating the illumination pattern from a series of images by fitting polynomial surfaces, and utilizing image gradients estimated in CIE-LUV color space [28–33]. Another strategy involves matching histograms of the images using a variety of specialized software packages that have been developed to adjust for both spectral and spatial illumination [34].
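As a minimal illustration of the histogram-matching strategy, the sketch below normalizes a tile against a user-chosen reference tile with scikit-image; the choice of reference and the assumption that tiles are RGB NumPy arrays are ours, not those of the cited packages.

    # Hedged sketch: color normalization by histogram matching (scikit-image).
    from skimage.exposure import match_histograms

    def normalize_tile(tile, reference_tile):
        """Match the color distribution of tile to that of reference_tile."""
        # channel_axis=-1 matches each RGB channel's histogram independently.
        return match_histograms(tile, reference_tile, channel_axis=-1)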

A second essential component of image preprocessing is compensating for tissue auto-fluorescence, an issue that arises in various scenarios, but notably in retrospective studies involving formalin-fixed, paraffin-embedded tissue sections [2]. For this, a multi-step algorithm involving two-stage tissue dye application and several mathematical transformations has been developed [29, 35, 36]. The end result is an image in which areas of high auto-fluorescence, such as blood cells and fat, are removed, allowing for a much clearer depiction of the disease marker(s) or other tissue characteristics being sought and, hence, for their detection, enumeration, and characterization.

After image preprocessing, a second essential step in the detection of pathology for computer-assisted diagnosis is to automatically detect certain histological structures, starting with larger tissue structures like the mucosal layer in the colon, proceeding to tissue cells and migratory cells like leukocytes and lymphocytes, and then to smaller intracellular structures like mitochondria and nuclei, and to determine their number, size, shape, and other morphological features [2]. Standardizing identification and quantitation remains problematic: a mitochondrion in cross section looks very different from one in longitudinal section, and even more so when sliced obliquely, while variations in the z-axis location of subcellular components cause identically sized objects to appear different in size. As stated above, this all must be accurate, a goal that still warrants considerable work for certain structures and settings. To date, for example, count accuracy rates have varied from as low as 60 % to as high as 98 %, depending upon the methods used, the tissue being studied, and the specific feature being counted (e.g., nuclei versus mitoses) [37–41].

Segmentation is the process by which specific cytological structures are delineated; it can be restricted to individual structures like nuclei or mitochondria [42, 43] or applied more globally to entire cells or tissues [2, 44], with either different algorithms used to identify different structures or the same algorithm run in different modes. Similarly, structures can be identified by seeking specific markers within them or via computational methods, such as Hessian matrix eigenvalues, which capture curvature and distinguish ridge-like membrane structures from more amorphous, rounded (blob-like) nuclear structures [2]. For most forms of cancer, for example, it is critical to delineate epithelium from stromal and connective tissue, which can be accomplished by enumerating specific epithelial cell markers. For example, Sharangpani et al. [45] used imaging algorithms incorporating both colorimetric (RGB) and intensity (gray scale) determinations to quantify estrogen and progesterone receptor immunoreactivity in human breast cancer tissue.
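As a simple, hedged illustration of segmentation on a stained slide, the sketch below labels candidate nuclei in an H&E tile by color deconvolution followed by Otsu thresholding, using scikit-image; the pipeline and its size thresholds are illustrative rather than any method cited above.

    # Hedged nucleus-segmentation sketch (scikit-image); thresholds are illustrative.
    import numpy as np
    from skimage import color, filters, morphology, measure

    def segment_nuclei(rgb_tile: np.ndarray) -> np.ndarray:
        """Label candidate nuclei in an H&E-stained RGB tile."""
        # Separate stains by color deconvolution; channel 0 approximates hematoxylin.
        hed = color.rgb2hed(rgb_tile)
        hematoxylin = hed[:, :, 0]

        # A global Otsu threshold on the hematoxylin channel marks nuclear regions.
        mask = hematoxylin > filters.threshold_otsu(hematoxylin)

        # Remove small specks and fill small holes before labeling connected components.
        mask = morphology.remove_small_objects(mask, min_size=30)
        mask = morphology.remove_small_holes(mask, area_threshold=30)
        return measure.label(mask)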

Another step, once basic structures are identified, is feature extraction, which, generically, is the process of reducing a large set of data to a smaller set of descriptors that still characterize it accurately. When analyzing complex data, one major challenge stems from the number of variables involved: analysis involving many variables generally requires either extensive memory and computational power or a classification algorithm that over-fits the training sample and generalizes poorly to new samples. With feature extraction, variable combinations are generated to circumvent this challenge. In WSI analysis, this involves extracting specific object-level features of identified structures, such as their area, various incorporated shapes (e.g., elliptical, convex), center of mass, optical density, fractal dimension, and image band intensity, among many others. Structural information can be further categorized graphically by examining spatially related features to define a large set of topological elements and thereby characterize tissue organization, including the clustering of cells and tissue characteristics around such clusters, through measures like the number of nodes, edges, triangles, and k-walks, the spectral radius, eigen exponent, roundness factor, and homogeneity. Such stereological assessment of structures is associated with tremendous inter-rater variability when performed manually [46]. Digitally, cell graphs can be constructed in both two and three dimensions, and the resulting models used for a variety of purposes, such as differentiating cell types within a given tissue by modeling the extracellular matrix [47] and determining cancer grade [48].
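A minimal sketch of object-level feature extraction is given below, assuming labeled objects such as those produced by a segmentation step like the one above and using scikit-image's regionprops; the particular features collected are a small illustrative subset.

    # Hedged sketch: object-level features from a labeled image (scikit-image).
    from skimage import measure

    def object_features(label_image, intensity_image):
        """Collect a few per-object descriptors (size, shape, intensity)."""
        rows = []
        for region in measure.regionprops(label_image, intensity_image=intensity_image):
            rows.append({
                "area": region.area,                      # object size in pixels
                "eccentricity": region.eccentricity,      # elongation of the fitted ellipse
                "solidity": region.solidity,              # area / convex hull area
                "centroid": region.centroid,              # center of mass
                "mean_intensity": region.mean_intensity,  # proxy for optical density
            })
        return rows

Each dictionary then becomes one row of the feature matrix fed into the selection, dimensionality reduction, and classification steps discussed below.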

Multi-scale feature extraction mimics the human approach to visualizing a slide, which typically entails adjusting resolution to view different characteristics. For example, at low resolution, the overall topology of the slide is seen; at medium resolution, larger structures like nuclei can be detected; and at high resolution, the morphology of more specific histological structures like nucleoli, mitochondria, and endoplasmic reticulum can be delineated [2]. This same approach can be used digitally, starting with the lowest resolution and progressing to higher levels of resolution for more detailed analysis [49, 50]. To achieve this, Sertel et al. decomposed images into multi-resolution representations using a Gaussian pyramid, followed by sequential color-space conversion and feature construction, and then feature extraction and selection at each resolution level. Classification labels (e.g., differentiated vs. undifferentiated) were then assigned to each image tile, and the tiles combined to form an overall classification map [2]. Using this approach, Doyle et al. [51] were able to accurately detect areas of malignancy in prostate tissue samples.
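A Gaussian pyramid of this kind can be produced with off-the-shelf tools; the sketch below uses scikit-image's pyramid_gaussian and is a generic illustration rather than the decomposition used by Sertel et al.

    # Hedged sketch: multi-resolution decomposition via a Gaussian pyramid.
    from skimage.transform import pyramid_gaussian

    def multiscale_views(rgb_tile, max_layer=3):
        """Return progressively downsampled views, from coarse topology to fine detail."""
        # downscale=2 halves each spatial dimension per level; channel_axis=-1
        # tells scikit-image that the last axis holds the RGB channels.
        return list(pyramid_gaussian(rgb_tile, max_layer=max_layer,
                                     downscale=2, channel_axis=-1))

Features can then be extracted and selected at each level, and per-tile labels combined into an overall classification map, as described above.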

Algorithms also have been developed, modified, or adopted from other applications for a number of other purposes including



  • feature selection—the process of reducing the number of variables being analyzed by identifying those features that are most diagnostically relevant [52–60];


  • dimensionality reduction—which similarly reduces the number of variables being considered using statistical tools such as principal component analysis [61], linear discriminant analysis [62], and independent component analysis [63] to handle especially large numbers of variables [64–67] (a brief sketch follows this list);


  • manifold learning—which is one of a large number of statistical manipulations designed to handle data that require more than two or even three dimensions to be represented, utilizing the assumptions that (1) the data of interest lie on an embedded nonlinear manifold within a higher dimensional space but also that (2) the manifold can be reduced, allowing for the data to be visualized in a lower dimensional space.

Such algorithms include graph embedding constructs, Gaussian process latent variable models, and diffusion maps [68–72], among others.
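As a small, hedged example of the dimensionality-reduction step listed above, the sketch below applies principal component analysis with scikit-learn to an illustrative per-object feature matrix; the matrix dimensions and variance threshold are assumptions.

    # Hedged sketch: dimensionality reduction of a feature matrix with PCA.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.random((500, 120))     # illustrative: 500 objects x 120 extracted features

    # Keep the smallest number of components explaining ~95 % of the variance.
    pca = PCA(n_components=0.95)
    X_reduced = pca.fit_transform(X)
    print(X.shape, "->", X_reduced.shape)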

All of the above-mentioned tools have been used outside of histopathology for other types of images, in medicine most often radiographic, but also for such functions as facial recognition [73]. Where histopathology tends to differ, in terms of its relevant information, is in the tremendous density of data that must be detected and analyzed; this is where various classification and subcellular quantification tools become most useful. Multiple classifier and learning ensemble systems work on the premise that the accuracy of identification is enhanced by using multiple rather than single classifiers, both by limiting bias and by reducing the high variance that sometimes exists with a single model [2]. As with many of the previously mentioned procedures, multiple classifier and learning ensemble systems rely on a variety of statistical manipulations such as principal component and linear discriminant analysis, but also on techniques such as the kernel function, which allows data to be projected into high-dimensional space. Numerous examples exist of these techniques being used to accurately diagnose a variety of cancerous lesions, including prostate cancer [70, 74–76], adenocarcinoma of the colon [77, 78], meningioma [79], malignant mesothelioma [80], breast cancer [80, 81], and lung cancer [82].
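As a hedged illustration of the ensemble idea, the sketch below combines several base classifiers, including a kernel (RBF) support vector machine, by soft voting with scikit-learn; the choice of base learners and their parameters is ours, and the training data are assumed to be a feature matrix and tile labels produced upstream.

    # Hedged sketch: a multiple-classifier ensemble via soft voting (scikit-learn).
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("svm", SVC(kernel="rbf", probability=True)),  # kernel projects features to high-dimensional space
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",  # average predicted class probabilities across the three models
    )
    # ensemble.fit(X_train, y_train); predictions = ensemble.predict(X_tiles)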

Even with all of these advancements, and with many more on the way, another, perhaps final, major obstacle that remains in the way of digital pathology's widespread adoption is the lack of standardization in the field. In light of this, national and international efforts are being undertaken to standardize each stage, orchestrated by organizations such as the International Academy of Digital Pathology, the College of American Pathologists' Diagnostic Intelligence and Health Information Technology (DIHIT) Committee, and EURO-TELEPATH, the primary telepathology network in Europe [83–86]. Standardization is critical not only for diagnostics, but also given the widespread acceptance digital pathology is now receiving as an educational tool [86–91]. Its advantages in creating a permanent, easily accessible, and readily transferrable system of pathology slide and specimen archiving are also clear.
