
Discussion

To analyze the proposed algorithm more comprehensively, we further evaluate its performance with respect to four important parameters: the scale of the LTP radius, R, the interval of LTP pairs, d, the threshold value, th, and the spatial pyramid structure, ℓ. In addition, we investigate how the codebook size of the BoW framework affects classification performance. In the following, we use the classification accuracy at the cell level as the measurement.

The scale of LTP radius and the interval of LTP pairs: Table 6.8 shows the classification performance of various (R, d) combinations with fixed P = 4. Since combining all three scales yields the best accuracy on both datasets, we choose the combination of LTPs with parameters (R, d) = (1, 2), (2, 4), (4, 8) for both the ICPR2012 dataset and the ICIP2013 training dataset.

The threshold value: Table 6.9 summarizes the performance of the proposed PLTP-SRI algorithm with different threshold values on the ICPR2012 dataset and the

Table 6.8 Classification accuracy (%) of PLTP-SRI under various (R, d)s

(R, d)                    Accuracy (ICPR2012)   Accuracy (ICIP2013)
(1, 2)                    65.6                  69.2
(2, 4)                    63.5                  67.8
(4, 8)                    61.7                  60.6
(1, 2), (2, 4)            69.2                  71.1
(1, 2), (4, 8)            65.6                  69.9
(2, 4), (4, 8)            62.0                  67.0
(1, 2), (2, 4), (4, 8)    70.2                  74.6
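The multi-scale rows of Table 6.8 combine the single-scale descriptors; a minimal sketch of such a combination, assuming (as is common for multi-scale LBP/LTP variants) that the per-scale histograms are simply concatenated, could look like this. The extractor `extract_ltp_hist` is a hypothetical placeholder, not the chapter's actual implementation:

```python
import numpy as np

def multiscale_feature(image, scales, extract_ltp_hist):
    """Concatenate LTP histograms computed at several (R, d) scales.

    extract_ltp_hist(image, R, d) is a placeholder for the single-scale
    LTP extractor; any function with that signature can be plugged in.
    """
    return np.concatenate([extract_ltp_hist(image, R, d) for (R, d) in scales])

# Toy single-scale extractor (hypothetical), returning a 4-bin histogram:
def dummy_hist(image, R, d):
    return np.full(4, R, dtype=float)

feat = multiscale_feature(None, [(1, 2), (2, 4), (4, 8)], dummy_hist)
print(feat.shape)  # (12,)
```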

Table 6.9 Classification accuracy of PLTP-SRI with different thresholds for ternary pattern calculation

th                    0      1      2      3      4      5
Acc (ICPR2012) (%)    70.2   67.2   63.8   59.1   57.8   51.6
Acc (ICIP2013) (%)    71.5   73.1   74.6   71.0   70.8   66.3

Table 6.10 Classification accuracy of PLTP-SRI with different levels of spatial pyramid

Levels                1      2      3      4
Acc (ICPR2012) (%)    61.3   68.0   70.2   66.5
Acc (ICIP2013) (%)    64.8   70.2   74.6   72.7

ICIP2013 training dataset, respectively. The histograms of the HEp-2 cell images are narrow and centered toward the low end of the gray scale, so the differences between center pixels and their neighbors are very small. Beyond the optimum value, the extracted features become less discriminative as the threshold increases. We observe that th = 0 performs best on the ICPR2012 dataset, while th = 2 is the better choice for the ICIP2013 training dataset.
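The ternary quantization that th controls follows the standard LTP rule: a neighbor-center difference within ±th maps to 0, larger positive differences to +1, larger negative ones to −1. A minimal sketch:

```python
import numpy as np

def ternary_codes(center, neighbors, th):
    """Quantize neighbor-center gray-level differences into {-1, 0, 1}.

    Differences within [-th, th] map to 0; larger positive differences
    map to +1 and larger negative ones to -1 (the LTP thresholding rule).
    """
    diff = np.asarray(neighbors, dtype=int) - int(center)
    codes = np.zeros_like(diff)
    codes[diff > th] = 1
    codes[diff < -th] = -1
    return codes

# th = 0 keeps every nonzero difference, which suits the low-contrast
# ICPR2012 images; a larger th zeroes out small differences.
print(ternary_codes(100, [97, 100, 103, 101], th=2))  # [-1  0  1  0]
```

With a low-contrast image, most differences are small, which is why raising th quickly erodes discriminability on ICPR2012.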

The spatial pyramid structure of PLTP-SRI: With respect to the spatial pyramid scheme, we evaluate the classification accuracy under various levels, as shown in Table 6.10. Accordingly, we choose three levels, ℓ = (1, 2, 3), which means that images are partitioned into increasingly finer subregions, i.e., 1 × 1, 2 × 2 and 4 × 4. In each subregion, we use a max-pooling strategy to capture the characteristics in the corresponding feature space.
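The pyramid pooling described above can be sketched as follows. This is a generic illustration assuming a dense per-pixel descriptor map; grid sizes (1, 2, 4) correspond to the three chosen levels:

```python
import numpy as np

def spatial_pyramid_maxpool(feature_map, grids=(1, 2, 4)):
    """Max-pool a dense descriptor map over a spatial pyramid.

    feature_map: (H, W, D) array of D-dimensional local descriptors.
    grids: partition sizes; (1, 2, 4) gives 1x1, 2x2 and 4x4 subregions.
    Returns the concatenated pooled vector of length D * sum(g * g).
    """
    H, W, D = feature_map.shape
    pooled = []
    for g in grids:
        row_blocks = np.array_split(np.arange(H), g)
        col_blocks = np.array_split(np.arange(W), g)
        for rows in row_blocks:
            for cols in col_blocks:
                cell = feature_map[np.ix_(rows, cols)]
                # max over all pixels in this subregion, per dimension
                pooled.append(cell.reshape(-1, D).max(axis=0))
    return np.concatenate(pooled)

fm = np.random.rand(8, 8, 16)
vec = spatial_pyramid_maxpool(fm)
print(vec.shape)  # (336,)  i.e. 16 * (1 + 4 + 16)
```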

The codebook size of the BoW framework: To evaluate the performance of codebooks of various sizes, we choose five increasing sizes: 256, 512, 1024, 2048 and 4096. Figure 6.5 shows the impact of codebook size on classification. It can be seen that the classification accuracy improves as the size increases from 256 to

Fig. 6.5 Classification accuracy of BoW representation with various codebook sizes

1024, but tends to level off for larger sizes from 1024 to 4096. Note that, owing to limited PC memory, we could not generate a codebook of size 4096 for the ICIP2013 training dataset. Considering both efficiency and accuracy, 1024 is the preferred choice.
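The chapter does not spell out how the codebook is learned; k-means clustering over pooled local descriptors is the usual choice in BoW pipelines, so the sketch below assumes it (plain Lloyd's algorithm, toy sizes only, since the distance computation materializes an N × k × D array):

```python
import numpy as np

def build_codebook(descriptors, k=1024, iters=10, seed=0):
    """Learn k visual words from (N, D) local descriptors via k-means."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each descriptor to its nearest center (squared distance)
        d2 = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def encode(descriptors, centers):
    """Hard-assignment BoW histogram of an image over the codebook."""
    d2 = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(axis=1), minlength=len(centers))
    return hist / hist.sum()

rng = np.random.default_rng(1)
descs = rng.random((200, 8))
words = build_codebook(descs, k=16, iters=5)
hist = encode(descs, words)
print(words.shape)  # (16, 8)
```

The memory limit mentioned for the 4096-word codebook is consistent with this kind of dense distance computation growing linearly in k.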
