A classifier decides the class of a given input datum. Assume that there are only two classes, w1 and w2. A block diagram of such a classifier then has the form shown in Fig. 2.5: the classifier first computes a discriminant function g(x) from a feature x extracted from the input datum and assigns the datum to class w1 if g(x) is positive, and to w2 otherwise. According to , the following three different approaches work for the construction of classifiers:
1. Construction of a Generative Model
In this approach, the posterior probability distribution of each class is computed via Bayes' theorem as

P(w_i | x) = p(x | w_i) P(w_i) / p(x),

where all of the terms on the right side are estimated from a set of training data. Once the posterior probability distributions of all classes are computed, a classifier can be constructed by following Bayes' decision rule. This approach is called generative because input data can be artificially generated by sampling from the estimated distributions.
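The generative approach above can be sketched in a few lines. This is a minimal illustration, not the book's implementation: it assumes one-dimensional features and Gaussian class-conditional densities, and the training samples are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 200 samples labeled w1, 100 labeled w2.
x1 = rng.normal(loc=0.0, scale=1.0, size=200)
x2 = rng.normal(loc=3.0, scale=1.0, size=100)

# Estimate the terms on the right side of Bayes' theorem from the data:
# class-conditional density parameters and class priors P(w_i).
mu1, s1 = x1.mean(), x1.std()
mu2, s2 = x2.mean(), x2.std()
p1 = len(x1) / (len(x1) + len(x2))
p2 = 1.0 - p1

def gauss(x, mu, s):
    # Gaussian density p(x|w_i) with the estimated parameters.
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def posterior_w1(x):
    # P(w1|x) = p(x|w1)P(w1) / p(x), with p(x) expanded over both classes.
    num = gauss(x, mu1, s1) * p1
    den = num + gauss(x, mu2, s2) * p2
    return num / den

def classify(x):
    # Bayes' decision rule: choose the class with the larger posterior.
    return "w1" if posterior_w1(x) > 0.5 else "w2"
```

Because the densities are estimated explicitly, new data could also be generated by sampling from `gauss`, which is what makes the model generative.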
2. Construction of a Discriminative Model
In a discriminative model approach, the posterior probability distribution of each class is directly estimated from a set of training data, and a classifier is constructed based on Bayes’ decision theorem.
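A standard example of this approach is logistic regression, which estimates P(w1|x) directly without modeling p(x|w_i). The sketch below uses plain gradient ascent on the log-likelihood; the synthetic data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: label 1 for class w1, 0 for class w2.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b = 0.0, 0.0
for _ in range(2000):
    # Current estimate of the posterior P(w1|x) under the sigmoid model.
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    # Gradient of the average log-likelihood with respect to w and b.
    w += 0.1 * np.mean((y - p) * x)
    b += 0.1 * np.mean(y - p)

def classify(t):
    # Bayes' decision rule applied to the directly estimated posterior.
    p = 1.0 / (1.0 + np.exp(-(w * t + b)))
    return "w1" if p > 0.5 else "w2"
```

Note that only the posterior is parameterized here; nothing in the model describes how the data themselves are distributed, so no data can be generated from it.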
3. Direct Construction of Classifier
In this approach, the resulting classification function outputs the class of an input datum directly, not necessarily based on the posterior probability distributions of the input data. Such classification functions can be constructed from training data paired with their desired outputs.
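The classic example of direct construction is the perceptron: its parameters are adjusted only when a training datum is misclassified, and no probability distribution is estimated at any point. The data and epoch count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training set: label +1 for class w1, -1 for class w2.
x = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
y = np.concatenate([np.full(100, 1.0), np.full(100, -1.0)])

w, b = 0.0, 0.0
for _ in range(50):
    for xi, yi in zip(x, y):
        if yi * (w * xi + b) <= 0:  # update only on a misclassification
            w += yi * xi
            b += yi

def g(t):
    # Discriminant function as in Fig. 2.5: sign(g) gives the class.
    return w * t + b
```

The learning rule directly drives the number of misclassified training data toward zero, which matches the third approach's goal of minimizing the probability of misclassification without modeling any distributions.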
For example, employing the first or the second approach, the function g(x) in Fig. 2.5 can be designed as

g(x) = P(w1 | x) − P(w2 | x),

so that g(x) is positive exactly when class w1 has the larger posterior probability.
H. Hontani et al.
Classifiers are constructed from sets of training data, and the training process is called learning. A classifier has parameters whose values are estimated from a set of training data in the learning process. In supervised learning, for example, each training datum is assigned a label indicating its desired class, and the parameter values are estimated so that the outputs of the classifier are consistent with the labels. In the first two approaches above, the parameters of the classifier describe the probability distributions of the data and of the classes. In the last approach, the classification function is represented parametrically, and the parameter values are estimated so that the probability of misclassification is minimized.
A flowchart of the process of classifier design is shown in Fig. 2.6, which appeared in . In the flowchart, the gathered data are first normalized and/or registered together. This important step is not easy. When a classifier of character images is constructed, for example, images of characters should be gathered, and the character images should be normalized by bounding each character with a rectangle and then resizing all of the rectangles to identical dimensions.

Fig. 2.6 A flowchart of the process of classifier design
This normalization cancels the variations among character images with respect to the locations and sizes of the characters, so that a classifier can be constructed to classify each given character image based only on the differences in the shapes/patterns intrinsic to each character. This normalization/registration of gathered data is required not only for the construction of classifiers but also for the construction of regressors. The training data of organs need to be normalized/registered when constructing computational anatomical models that are used for medical image analysis. To cancel the differences in the locations, sizes, and positions of patients' bodies, the gathered training images must be nonrigidly registered: each voxel in a training image is matched to a voxel in each of the other training images from an anatomical point of view, or each point on the surface of a training organ region is matched to a point on the surface of each of the other training organ regions. For the former nonrigid registration, anatomical landmarks are essential and will be described in Sect. 2.3.3. For the computation of the matching of surfaces, diffeomorphisms supply the mathematical foundation and will be described in Sect. 2.3.4. After normalization/registration is successfully applied to all of the data, the next step is the process of data structure analysis, followed by classifier design.
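The character-image normalization described above can be sketched as a bounding-box crop followed by resampling to a fixed size. The 16 x 16 target size and the nearest-neighbor resampling are illustrative assumptions; a real system might use a more careful interpolation.

```python
import numpy as np

def normalize_char(img, size=16):
    # Find the bounding rectangle of the character (nonzero pixels).
    ys, xs = np.nonzero(img)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Resample the crop to size x size with nearest-neighbor indexing,
    # so every character ends up with identical dimensions.
    h, w = crop.shape
    ri = np.arange(size) * h // size
    ci = np.arange(size) * w // size
    return crop[np.ix_(ri, ci)]

# Usage: a small rectangular "character" placed off-center in a canvas.
canvas = np.zeros((40, 40), dtype=np.uint8)
canvas[5:15, 20:28] = 1
out = normalize_char(canvas)
```

After this step, two images of the same character drawn at different positions and scales map to (nearly) the same description, which is exactly the variation the normalization is meant to cancel.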
One of the main purposes of data structure analysis is to determine the representation of the targets. In , a representation is defined as "a formal system for making explicit certain entities or types of information, together with a specification of how the system does this." The result of using a representation to describe a given entity is called a description of the entity in that representation. Feature extraction from measured data, for example, is a representation of the data. Applying different representations to the same measured data results in different descriptions. For constructing recognition systems, representations that improve the separability of the descriptions of data into different classes need to be employed. Segmenting organs in given medical images requires solving recognition or regression problems, and the performance of the segmentation likewise varies depending on the representations of the targets. A classifier/regressor processes the descriptions and hence should be designed based on the representation of the measured data, as shown in Fig. 2.6.
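A small synthetic example illustrates how the choice of representation changes separability. Two classes lying on concentric circles overlap in every raw coordinate, but the radius feature r = sqrt(x^2 + y^2) separates them with a single threshold. The radii (1 and 3) and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t1, t2 = rng.uniform(0.0, 2.0 * np.pi, (2, 200))
inner = np.stack([np.cos(t1), np.sin(t1)], axis=1) * 1.0  # class w1
outer = np.stack([np.cos(t2), np.sin(t2)], axis=1) * 3.0  # class w2

# Description 1: raw coordinates. The x-ranges of the two classes
# overlap, so no threshold on x alone separates them.
overlap_x = inner[:, 0].max() > outer[:, 0].min()

# Description 2: radius feature. Every w1 radius is smaller than every
# w2 radius, so one threshold on r separates the classes perfectly.
r1 = np.linalg.norm(inner, axis=1)
r2 = np.linalg.norm(outer, axis=1)
separable = r1.max() < r2.min()
```

The data are the same in both cases; only the representation changes, and with it the difficulty of the classification problem the downstream classifier must solve.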