# Bootstrap Method

A bootstrap dataset is created by randomly selecting *m* data points, with replacement, from the original dataset. Drawing *B* bootstrap datasets independently enables evaluation of the classification error as

$$\bar{e} = \frac{1}{B} \sum_{i=1}^{B} e^{(i)},$$

where $e^{(i)}$ denotes the classification error evaluated by using the *i*-th bootstrap dataset.
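The procedure above can be sketched as follows. This is a minimal illustration, not the chapter's own implementation: the helper names (`bootstrap_error`, `train_threshold`) are hypothetical, and evaluating each $e^{(i)}$ on the points left out of the *i*-th bootstrap sample is one common convention that the text leaves unspecified.

```python
import random

def bootstrap_error(data, labels, train, B=100, seed=0):
    """Average classification error over B bootstrap datasets.

    Each bootstrap set contains m points drawn with replacement from
    the original m points.  Here e^(i) is measured on the points left
    out of the i-th bootstrap sample (an assumption; the text only
    specifies averaging over the B sets).
    """
    rng = random.Random(seed)
    m = len(data)
    errors = []
    for _ in range(B):
        # Draw m indices with replacement to form one bootstrap set.
        idx = [rng.randrange(m) for _ in range(m)]
        held_out = [j for j in range(m) if j not in set(idx)]
        if not held_out:      # rare: the sample covered every point
            continue
        predict = train([(data[j], labels[j]) for j in idx])
        wrong = sum(predict(data[j]) != labels[j] for j in held_out)
        errors.append(wrong / len(held_out))
    return sum(errors) / len(errors)

# Hypothetical toy data: one-dimensional points separated at x = 0.
data = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

def train_threshold(train_set):
    # Trivial classifier: predict class 1 if x > 0 (ignores the
    # training set, so the estimated error is exactly 0 here).
    return lambda x: 1 if x > 0 else 0

err = bootstrap_error(data, labels, train_threshold, B=50)
```

Because the fixed threshold separates the two toy classes perfectly, every $e^{(i)}$ is zero and the averaged estimate is zero; with a real learning algorithm the estimate would reflect its generalization error.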

# Examples of Classifiers

Many methods for constructing classifiers have been proposed. In the following, some of the most popular and powerful classification methods are briefly described.

# Decision Tree

Decision trees are represented by tree-structured directed graphs (see Fig. 2.7) [37]. A decision tree classifies input data through a sequence of questions along the tree. Each internal node represents a test, each directed branch links a parent node to a child node and represents a result of the test at the parent node, and each leaf node represents a class. Starting from the root node, a data point is classified by applying the test at the current node and following the branch corresponding to the test result, until a leaf node is reached. Several algorithms can be employed for constructing decision trees from a set of training data, such as classification and regression trees (CART) [38], ID3 [37], and C4.5 [38]. Regression trees can also be constructed from sets of training data for solving regression problems [38].
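The root-to-leaf classification procedure can be sketched as nested tests. The attribute names and splits below follow Quinlan's well-known *Saturday morning* example (the subject of Fig. 2.7); treat the exact splits as an illustrative assumption rather than a transcription of the figure.

```python
def classify_morning(outlook, humidity, windy):
    """Classify a Saturday morning as 'P' (suitable for the activity)
    or 'N' (not suitable) by a sequence of tests from root to leaf.

    Attributes and splits assumed from Quinlan's example:
      outlook in {'sunny', 'overcast', 'rain'},
      humidity in {'high', 'normal'}, windy is a bool.
    """
    # Root node tests the outlook attribute.
    if outlook == "overcast":
        return "P"                                # leaf: positive class
    if outlook == "sunny":
        # Child node tests humidity.
        return "N" if humidity == "high" else "P"
    # outlook == "rain": child node tests windy.
    return "N" if windy else "P"
```

For example, `classify_morning("sunny", "high", False)` follows the *sunny* branch and then the *high humidity* branch, reaching an `N` leaf.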

Fig. 2.7 An example of a decision tree of *Saturday morning attributes* that appeared in [37]. *P* denotes the positive class of mornings suitable for some activity, and *N* denotes the negative class