Performance Evaluation

Figure 2.6 displays the methodology by which the classification error of a designed classifier is evaluated. The results of the evaluation are used to improve the classifier design. Appropriate performance-evaluation techniques are vital because the amount of data available for classifier learning and for performance evaluation is limited. If the performance of a classifier is evaluated by testing it on a dataset that was also used to construct the classifier, the performance evaluation will be biased; in many cases, the classification error will be underestimated. To achieve an unbiased performance evaluation, n-fold cross validation or a bootstrap method is employed.
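The bias of testing on the construction data can be seen with a simple held-out split. The following is a minimal sketch, assuming a scikit-learn classifier and its bundled breast-cancer dataset (neither is named in the text): the resubstitution error, measured on the data used for construction, is typically lower than the error measured on data the classifier has not seen.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Assumed example data and classifier, for illustration only.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Error on the data used for construction: typically underestimated.
resubstitution_error = 1.0 - clf.score(X_train, y_train)
# Error on held-out data: a less biased estimate.
holdout_error = 1.0 - clf.score(X_test, y_test)

print(f"resubstitution error: {resubstitution_error:.3f}")
print(f"held-out error:       {holdout_error:.3f}")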

n-Fold Cross Validation

First, the set of samples is divided into n subsets. Then n − 1 subsets are used for learning, and the remaining subset is used for the performance evaluation. The roles of the subsets are then rotated, and the learning-and-evaluation procedure is repeated n times so that each subset serves once as the test set. Assuming that the error probability in the i-th test step is ei, the generalization error of the classifier is estimated as the average (Σi ei)/n. When the number of subsets equals the number of samples (i.e., each subset contains only a single sample), the method is called the leave-one-out method or the jackknife method. The leave-one-out method often yields a more satisfactory evaluation [34], though it is computationally more expensive, especially when the amount of data is large.
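A minimal sketch of the procedure, again assuming scikit-learn for the classifier and for generating the fold indices (the function name cross_validation_error is illustrative, not from the text): each of the n subsets is held out once, the classifier is learned on the remaining n − 1 subsets, and the fold errors ei are averaged. Setting the number of folds equal to the number of samples gives the leave-one-out estimate.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

def cross_validation_error(X, y, n_folds):
    """Estimate the generalization error as the average (sum_i ei)/n."""
    fold_errors = []
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        clf = KNeighborsClassifier(n_neighbors=3)
        clf.fit(X[train_idx], y[train_idx])               # learn on n - 1 subsets
        e_i = 1.0 - clf.score(X[test_idx], y[test_idx])   # test on the held-out subset
        fold_errors.append(e_i)
    return np.mean(fold_errors)

X, y = load_breast_cancer(return_X_y=True)
print("10-fold CV error:", cross_validation_error(X, y, n_folds=10))
# Leave-one-out: the number of folds equals the number of samples.
print("leave-one-out error:", cross_validation_error(X, y, n_folds=len(y)))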
