To evaluate the performance of our proposed algorithm, we use two publicly available HEp-2 cell datasets, as described in Sect. 1.3.

Experiment Setup

In preprocessing, HEp-2 cell images are converted to grayscale. Our automatic classification system first extracts two kinds of image descriptors: PLTP-SRI and SIFT-BoW. For the number of LTP neighbor pixels, P, we choose P = 4. The dimension of PLTP-SRI is S × 2^{P}(2^{P} + 1), where S is a constant determined by the spatial pyramid structure described below, so a higher P requires considerably more memory and computation time. We extract PLTP-SRI from each patch of the image with different parameters. The scale of the LTP radius, R, the interval of LTP pairs, d, the threshold value, th, and the spatial pyramid scheme, t, are four important parameters to be considered. We study the influence of these parameters on staining pattern classification in Sect. 6.3.4.
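The growth of the descriptor dimension with P can be made concrete with a small sketch. The function name is hypothetical, and we assume S is the total number of subregions in the spatial pyramid (the sum of 4^l over the pyramid levels, as in the three-level scheme used later); the dimension formula itself follows the text above.

```python
# Hypothetical helper: dimension of the PLTP-SRI descriptor for a given
# neighborhood size P. Assumes S = total subregion count of the pyramid.

def pltp_sri_dim(P, levels=(0, 1, 2)):
    S = sum(4 ** l for l in levels)      # subregions over all pyramid levels
    return S * (2 ** P) * (2 ** P + 1)   # S * 2^P * (2^P + 1)

print(pltp_sri_dim(4))  # P = 4 -> 21 * 16 * 17 = 5712
print(pltp_sri_dim(6))  # P = 6 -> 21 * 64 * 65 = 87360, already 15x larger
```

This illustrates why P = 4 is a practical choice: moving from P = 4 to P = 6 inflates the descriptor by more than an order of magnitude.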

Dense SIFT features are extracted at a single scale from densely located patches of the grayscale images. The patches are centered at every six pixels and have a fixed size of 18 × 18 pixels. The codebook is generated by k-means clustering on the patch-level SIFT features of all the training images. Based on the pre-trained codebook, SIFT features are quantized to codes by a specific coding algorithm, as mentioned in Sect. 4.2. The LSC algorithm is chosen in our experiments due to its computational efficiency and superior performance for HEp-2 cell classification. For the spatial pyramid structure of the BoW framework, we choose three levels, t = (1, 2, 3), which is a common choice in the literature. The images are divided into increasingly finer subregions, i.e., 2^{0} × 2^{0}, 2^{1} × 2^{1} and 2^{2} × 2^{2}. In each subregion, we employ a max-pooling strategy to represent the characteristics of the corresponding feature space.
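A minimal sketch of this pipeline stage, assuming the dense grid and pyramid layout described above (function names and the exact cell-assignment rule are our own illustration, not the authors' code): patch centers are placed on a 6-pixel step grid, and the coded features are max-pooled over the 1 + 4 + 16 = 21 pyramid subregions.

```python
import numpy as np

def patch_centers(h, w, step=6, patch=18):
    """Centers of densely sampled patches fully inside an h x w image."""
    half = patch // 2
    ys = range(half, h - half + 1, step)
    xs = range(half, w - half + 1, step)
    return [(y, x) for y in ys for x in xs]

def pyramid_max_pool(codes, centers, h, w, levels=(0, 1, 2)):
    """Max-pool (n_patches, K) code vectors over a 3-level spatial pyramid."""
    pooled = []
    for l in levels:
        n = 2 ** l                               # n x n subregions at level l
        for i in range(n):
            for j in range(n):
                cell = [c for c, (y, x) in zip(codes, centers)
                        if i * h / n <= y < (i + 1) * h / n
                        and j * w / n <= x < (j + 1) * w / n]
                pooled.append(np.max(cell, axis=0) if cell
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)                # length (1 + 4 + 16) * K
```

With a codebook of size K, the final image representation is a 21K-dimensional vector.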

Table 6.1 Parameters for comparative algorithms

Algorithm        Parameters (P, R) or (P, R, d)
Our algorithm    (4, 1, 2), (4, 2, 4), (4, 4, 8)
CoALBP           (4, 1, 2), (4, 2, 4), (4, 4, 8)
RIC-LBP          (4, 1, 2), (4, 2, 4), (4, 4, 8)
CLBP             (8, 1), (12, 2), (16, 3)
LBP              (8, 1), (12, 2), (16, 3)

To evaluate the performance of the proposed algorithm, we compare it with CoALBP, the best-performing algorithm in the ICPR'12 contest [13], RIC-LBP [14], completed LBP (CLBP) [15] and multi-resolution LBP [16]. Table 6.1 shows the parameters used for each algorithm in our experiments. Additionally, we evaluate SIFT-BoW and PLTP-SRI individually, as they are the two components of our proposed algorithm.

We report classification accuracy at both the cell level and the image level for a comprehensive assessment. At the cell level, we use accuracy and sensitivity as performance measures. At the image level, the classification accuracy is the percentage of slide images correctly classified.
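These measures can be sketched with their standard definitions (this is our own illustration, not the authors' evaluation code; in particular, we assume image-level labels are obtained by a majority vote over the predicted labels of each image's cells):

```python
import numpy as np

def cell_accuracy(y_true, y_pred):
    """Fraction of cells whose predicted pattern matches the ground truth."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true == y_pred)

def sensitivity_per_class(y_true, y_pred):
    """Per-pattern sensitivity: recall of each staining-pattern class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return {c: np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)}

def image_accuracy(image_ids, y_true, y_pred):
    """Image-level accuracy: a slide image counts as correct when the
    majority vote over its cells matches the image's ground-truth pattern
    (assumed aggregation rule)."""
    correct = 0
    images = set(image_ids)
    for img in images:
        idx = [i for i, g in enumerate(image_ids) if g == img]
        votes = [y_pred[i] for i in idx]
        majority = max(set(votes), key=votes.count)
        correct += majority == y_true[idx[0]]
    return correct / len(images)
```

For example, with two images of three cells each, one misclassified cell lowers the cell-level accuracy but can leave the image-level accuracy at 100% if the majority vote is unaffected.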