Since the invention of optical coherence tomography (OCT), a number of scanning mechanisms have been proposed to improve resolution and scan speed. In recent years, OCT has been increasingly used in ophthalmologic examinations for the diagnosis of diseases including, but not limited to, macular degeneration and glaucoma. The spectrum of clinical applications of OCT scanning has widened rapidly: endoscopes, laparoscopes, and catheters have been combined with OCT scanners, and their clinical usefulness has been demonstrated not only in ophthalmology but also in cardiovascular and digestive surgery.
This section focuses on OCT for fundus examination. Quantitative measurement of intraretinal layers in an OCT volume can be useful in the diagnosis of diseases such as age-related macular degeneration (AMD), glaucoma, and symptomatic exudate-associated derangement (SEAD). Consequently, most image processing algorithms developed so far for OCT volumes extract intraretinal layers as an initial step. This section explains the anatomy visible in a retinal OCT image and the associated image processing algorithms, followed by CAD with OCT (see the review paper for other topics).
Retinal Anatomy on OCT

Figure 3.47 shows a retinal tomographic image centered on the macula, scanned by an OCT scanner during a fundus examination. The retinal layers include the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), external limiting membrane (ELM), photoreceptor inner and outer segments (PR IS, PR OS), and retinal pigment epithelium (RPE). The concave part at the center of the image is the fovea. The OCT image depicts the anatomy of about ten retinal layers, to which the horizontal axis of the image is roughly parallel. Note that the correspondence between a layer in an OCT image and an anatomical layer is not necessarily one-to-one because of the imaging limitations of resolution and signal-to-noise ratio (SNR). The gray values of the layers are similar to each other. In addition, vessels and hard exudates absorb and/or reflect near-infrared light and decrease the gray values of the regions deep to them, resulting in shadows and artifacts in the image. Consequently, recognizing individual retinal layers on OCT images is a difficult task. Note that Fig. 3.47 is a pseudo-color display of an OCT image, as frequently used in clinical situations.

Fig. 3.47 (a) Ultrahigh-resolution (UHR) OCT, (b, c) macular OCT images (Figure 1 of Ref. )
Intraretinal Layer Segmentation in OCT

Retinal layer segmentation is the most popular topic in the field of OCT image analysis. It is, however, difficult to carry out owing to the low SNR. To overcome this problem and recognize thin layers, noise reduction and prior knowledge of CA are essential.
The pioneering study of retinal layer segmentation was done by Hee , who applied a one-dimensional edge detector along the A-scan direction of an OCT image and proposed an algorithm to measure the thickness of the retinal nerve fiber layer and of the whole retina. This process is sensitive to noise because no anatomical features were incorporated. Another study  presented an algorithm with an MRF to increase robustness against noise. Subsequently, multilayer segmentation algorithms based on adaptive thresholding, edge detection, or texture analysis have been proposed [25, 50, 74, 109, 133, 319]. To reduce noise, denoising algorithms such as a complex diffusion process  or coherence-enhancing diffusion filtering  have been employed.
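As an illustration of this early A-scan approach, the sketch below locates the two strongest intensity transitions along a single synthetic depth profile after simple smoothing. The profile, kernel size, and function names are hypothetical, not Hee's actual parameters.

```python
import numpy as np

def retina_extent(ascan, smooth=5):
    """Locate the strongest dark-to-bright and bright-to-dark
    transitions along one A-scan (depth profile) after moving-average
    smoothing -- a rough surrogate for 1-D edge detection."""
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(ascan, kernel, mode="same")
    grad = np.diff(smoothed)
    top = int(np.argmax(grad))       # vitreous -> retina edge
    bottom = int(np.argmax(-grad))   # retina -> deeper-tissue edge
    return top, bottom

# synthetic A-scan: dark vitreous, bright retina, dark background
ascan = np.concatenate([np.full(40, 0.1), np.full(30, 0.8), np.full(30, 0.1)])
top, bottom = retina_extent(ascan)
thickness = bottom - top   # retinal thickness in pixels
```

In practice, measuring the nerve fiber layer alone requires locating an additional internal edge, and real A-scans need stronger speckle suppression than a moving average, which is why later work added MRFs and diffusion-based denoising.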
In contrast, a spline-based active contour model  and a level set-based segmentation algorithm  achieved higher segmentation performance than conventional algorithms; minimizing a curvature-based shape energy readily yields a smooth surface, which is a favorable feature from the anatomical point of view. The mean Dice coefficient between the true and extracted layers was 0.85.
Kajic et al.  proposed the use of an AAM for multilayer segmentation of the eight layers from the NFL to the RPE. First, layer boundaries with relatively strong edges, i.e., the upper boundaries of the NFL and CL and the lower boundary of the RPE, were found using an adaptive thresholding method followed by polynomial fitting. Using manually segmented training cases, the shape model was constructed from sparsely sampled distances (26 points) of the eight boundaries from the top (NFL) boundary. The texture model consisted of four features from each of the eight layers: the mean pixel value in the original image, the mean and standard deviation of the median-filtered image, and the mean of multiscale edges sampled along the boundaries. Rather than PCA, a neural network was used for dimensionality reduction, from 208 to 12 dimensions for the shape features and from 32 to 2 for the texture features. The original AAM would generate a new image with a texture learned from the texture variation and compute the distance between the synthesized image and the test image. Their method, instead of a pixel-wise comparison, used the layer boundaries produced by the model to compute texture features during optimization and compared them with the expected texture properties of each layer. The objective function includes a term that penalizes deviations from the boundaries determined by adaptive thresholding and a term from the AAM that constrains the optimization process. In addition, instead of starting from the mean model, the most similar model was selected based on the distance between the top and bottom boundaries as well as the ratio of the foveal pit distance to the greatest thickness, and it was used as the initial model.
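The layer-wise texture features can be sketched as follows; the exact definitions (filter sizes, edge scales) used by Kajic et al. differ, so the feature set below is illustrative only.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def layer_texture_features(image, layer_mask):
    """Four texture features for one layer, in the spirit of the AAM
    texture model: mean intensity, mean and standard deviation of the
    median-filtered image, and mean edge strength."""
    med = median_filter(image, size=3)
    edge = np.abs(sobel(image, axis=0))   # vertical gradient magnitude
    return np.array([
        image[layer_mask].mean(),
        med[layer_mask].mean(),
        med[layer_mask].std(),
        edge[layer_mask].mean(),
    ])

# toy two-layer image: a dark layer stacked on a bright one
img = np.vstack([np.full((10, 20), 0.2), np.full((10, 20), 0.7)])
top_mask = np.zeros(img.shape, dtype=bool)
top_mask[:10] = True
features = layer_texture_features(img, top_mask)
```

Concatenating such per-layer feature vectors over all eight layers gives the 32-dimensional texture space that the neural network then compresses to 2 dimensions.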
In their subsequent study, a variant of the AAM was applied to segmentation of the choroidal layer . The model was constructed in a similar way but used distances from the lower boundary of the RPE rather than from the NFL. In addition, a blob detector was employed in the objective function: a multiple-thresholding technique was applied to detect blobs corresponding to vessel cross sections. In optimizing the shape, the algorithm tried to maximize the ratio of the choroidal area covered by blobs to the total area of the choroid and the post-choroidal region. The authors concluded that the method was successful for this relatively difficult task.
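Given the description above, the blob-coverage term of the objective can be sketched as a simple area ratio; the masks and the exact weighting here are hypothetical simplifications of the published objective.

```python
import numpy as np

def blob_coverage_ratio(blob_mask, choroid_mask, post_choroid_mask):
    """Fraction of blob (vessel cross-section) pixels falling inside the
    candidate choroid, relative to the combined area of the choroid and
    the post-choroidal region.  Maximizing this favors shapes that
    enclose the detected vessel blobs."""
    covered = np.logical_and(blob_mask, choroid_mask).sum()
    denom = choroid_mask.sum() + post_choroid_mask.sum()
    return covered / denom if denom else 0.0

# toy masks: 4 of 6 blob pixels lie inside the candidate choroid
choroid = np.zeros((10, 10), dtype=bool); choroid[4:7] = True
post = np.zeros((10, 10), dtype=bool); post[7:] = True
blobs = np.zeros((10, 10), dtype=bool)
blobs[5, 0:4] = True   # inside the choroid
blobs[8, 0:2] = True   # below it
ratio = blob_coverage_ratio(blobs, choroid, post)
```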
Rathke et al.  proposed a combination of a local appearance model and a shape model for segmentation of nine boundaries, b1, ..., b9, i.e., NFL, GCL + IPL, INL, OPL, outer nuclear layer and inner segment (ONL + IS), connecting cilia (CL), OS, RPE, and choroid. In constructing the appearance model, sample patches s(i, j) of 3 × 15 pixels were drawn from labeled images for each of 19 classes corresponding to the ten layers and nine boundaries. Using the training cases, a class-specific density N(s; μ_k, Θ_k⁻¹) can be estimated for each class k, with mean μ_k and sparse precision matrix Θ_k obtained by applying a lasso penalty . Given an image I, the class-conditional likelihood of the patch at pixel (i, j) is defined as

p(s(i, j) | m_ij = k) = N(s(i, j); μ_k, Θ_k⁻¹).

For each pixel, the local class variable m_ij can then be determined by

m̂_ij = argmax_k p(m_ij = k | s(i, j)) = argmax_k p(s(i, j) | m_ij = k) p(m_ij = k),

using a uniform prior p(m_ij). The shape model is constructed from the continuous height values of all boundaries for all image columns j. When applying the model, initial estimates of the boundary locations are made using the appearance model and the shape prior of column j, i.e., marginalizing out all other columns. The authors showed, however, that the result could be improved by iteratively updating the conditional probability using the predictions of b_n,j from the other columns.
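A minimal version of this patch-based appearance model is sketched below: a Gaussian is fit per class from training patches, and a new patch receives the maximum-likelihood class under a uniform prior. For brevity, the sparse (lasso-regularized) precision matrix is replaced by the inverse of a regularized empirical covariance, and the patch dimension and class names are hypothetical.

```python
import numpy as np

def fit_class_gaussians(patches_by_class):
    """Estimate (mean, precision) per class from flattened patches."""
    params = {}
    for k, X in patches_by_class.items():
        X = np.asarray(X)                         # (n_samples, dim)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[k] = (mu, np.linalg.inv(cov))      # precision = inverse cov
    return params

def classify_patch(s, params):
    """argmax_k N(s; mu_k, Theta_k^-1) -- MAP under a uniform prior."""
    best, best_ll = None, -np.inf
    for k, (mu, prec) in params.items():
        d = s - mu
        # log-likelihood up to an additive constant
        ll = 0.5 * np.linalg.slogdet(prec)[1] - 0.5 * d @ prec @ d
        if ll > best_ll:
            best, best_ll = k, ll
    return best

rng = np.random.default_rng(0)
train = {
    "bright_layer": rng.normal(0.8, 0.05, size=(50, 3)),
    "dark_layer": rng.normal(0.1, 0.05, size=(50, 3)),
}
params = fit_class_gaussians(train)
label = classify_patch(np.array([0.78, 0.82, 0.80]), params)
```

The sparse precision matrix in the original work keeps the estimate well conditioned despite the high patch dimension (45 pixels) relative to the number of training samples, which the dense estimate above does not address.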
Recent important studies have been based on graph cuts, which can find a global minimum instead of the local minima from which active contour and level set models suffer. Garvin et al.  constructed a 3D graph in which the spatial relationships between neighboring layers were embedded. Multiple layers were segmented simultaneously by minimizing a cost function based on edge/regional image information, a priori surface smoothness, and interaction constraints. Six layers, or seven boundaries, were extracted from healthy eyes, and the average surface error between the extracted and true boundaries was 5.69 μm. The same research group presented a multiscale graph cut approach that extracted ten layers, or 11 boundaries, with a 5.75 μm surface error (Fig. 3.48) .
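The essence of this formulation can be illustrated in 2D with a dynamic-programming surrogate: choose one boundary row per column so that the summed cost is minimal while the row change between adjacent columns stays bounded (the smoothness constraint). Garvin et al. solve the full 3D, multi-surface problem as a minimum s-t cut; the sketch below is a deliberately simplified stand-in for a single boundary.

```python
import numpy as np

def trace_boundary(cost, max_jump=1):
    """Pick one row per column minimizing total cost, with the row
    change between neighboring columns limited to +/-max_jump."""
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - max_jump), min(rows, i + max_jump + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]     # best end row, then backtrack
    for j in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]

# toy cost image: cheap band around row 3 with a one-row dip at column 2
cost = np.ones((6, 5))
cost[3, :] = 0.1
cost[2, 2] = 0.05
boundary = trace_boundary(cost)
```

The graph construction of Garvin et al. generalizes this idea to several interacting surfaces at once, with inter-surface distance constraints encoded as additional graph arcs.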
Fig. 3.48 Segmentation results of 11 retinal surfaces (ten layers). (a) X-Z image of the OCT volume. (b) Segmentation results: nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), outer limiting membrane (OLM), inner segment layer (ISL), connecting cilia (CL), outer segment layer (OSL), Verhoeff's membrane (VM), and retinal pigment epithelium (RPE). The stated anatomical labeling is based on observed relationships with histology, although no general agreement exists among experts about the precise correspondence of some layers, especially the outermost ones. (c) 3D rendering of the segmented surfaces (N, nasal; T, temporal). (Figure 2 in Ref. )

Computer-Aided Diagnosis of Retinal OCT and Other Topics

The aforementioned multilayer segmentation algorithms are useful for CAD of retinal OCT, for example, computer-aided staging and evaluation of AMD treatment and assessment of SEAD and drusen [74, 75, 233]. Some studies have suggested an association of layer thickness with these diseases. Dufour et al.  proposed the use of an SSM for detecting pathology. Drusen found in patients with AMD can be detected as a displacement of the OS layer. The model was constructed using a simple grid around the fovea without requiring anatomical landmarks. The model was deformed to fit a new segmentation result, allowing 99% of the variation encountered in the normal training cases. For every landmark, the residual fitting error between the deformed model and the segmentation result was calculated, and the map of abnormality measures was determined by the fitting error normalized by the natural residual error in the normal samples.
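The normalized-residual abnormality measure just described can be sketched as a per-landmark z-score-like map; the grid size and the displaced landmark below are hypothetical, and the exact normalization in Dufour et al. may differ.

```python
import numpy as np

def abnormality_map(residuals, normal_residuals):
    """Per-landmark abnormality: the fitting residual of the test case
    normalized by the residual spread observed in normal training
    cases.  Large values flag drusen-like displacements."""
    sigma = normal_residuals.std(axis=0) + 1e-12   # per-landmark spread
    return np.abs(residuals) / sigma

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(40, 6))        # 40 normals, 6 landmarks
test = np.array([0.5, -0.2, 0.1, 8.0, 0.3, -0.4])  # landmark 3 displaced
amap = abnormality_map(test, normal)
suspect = int(np.argmax(amap))   # landmark with the strongest deviation
```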
Besides the retinal layers, the optic nerve head and vessels are important anatomical structures in a retinal OCT volume. Herzog et al.  proposed an extraction algorithm for the optic nerve head in OCT images. The cup-to-disc ratio was evaluated based on the extracted optic nerve head . Lee et al. and Wehbe et al. [173, 304] proposed vessel segmentation algorithms for OCT images, in which blood flow velocity was measured based on the segmentation results .