
Statistical analysis

Finally, the normalized and segmented images are smoothed by convolving them with an isotropic Gaussian kernel [13, 108], so that the intensity of each voxel is replaced by a weighted average of the surrounding voxels, in essence blurring the segmented image. The size of the smoothing kernel depends on the size of the expected regional differences, which can vary across studies [150, 239, 311]. Smoothing before statistical testing has three advantages. First, smoothing makes the data conform more closely to the Gaussian field model and renders them more normally distributed, increasing the validity of parametric statistical tests. Second, smoothing compensates for spatial normalization errors and decreases intersubject variability [13, 242]. Third, smoothing reduces the effective number of statistical comparisons and thus increases the sensitivity to detect changes by reducing the variance across subjects, although excessive smoothing diminishes the ability to localize changes in the brain accurately. Although these processing steps (normalization, segmentation, and smoothing) are necessary for the analysis of data across subjects [13, 242], they can also introduce errors and variability into the analysis, which can reduce sensitivity. For example, VBM cannot distinguish real changes in tissue volume from local misregistration of images [14, 40]. It should be noted that normalization accuracy, and thus sensitivity, will vary across regions.
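The smoothing step above can be sketched in a few lines. This is a minimal illustration using SciPy rather than SPM itself; the volume is random data, and the kernel size (8 mm FWHM) and voxel size (2 mm) are assumptions chosen for the example, not values prescribed by the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical segmented gray-matter probability volume (values in [0, 1]).
rng = np.random.default_rng(0)
gm_volume = rng.random((40, 48, 40))

# Smoothing kernels are usually specified by their full width at half
# maximum (FWHM) in mm; convert to the standard deviation expected by
# gaussian_filter: sigma = FWHM / (2 * sqrt(2 * ln 2)).
fwhm_mm = 8.0        # assumed kernel size for illustration
voxel_size_mm = 2.0  # assumed isotropic voxel size
sigma_voxels = fwhm_mm / voxel_size_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# Each voxel becomes a weighted average of its neighbors, blurring the image
# and making the intensity distribution smoother and more Gaussian.
smoothed = gaussian_filter(gm_volume, sigma=sigma_voxels)

# Smoothing reduces voxelwise variance while keeping the volume's shape.
print(smoothed.shape, smoothed.std() < gm_volume.std())
```

A larger FWHM suppresses more noise and tolerates larger normalization errors, but, as noted above, it also blurs away the ability to localize small regional effects.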

Statistical analysis using the general linear model and the theory of Gaussian random fields is performed to identify regions of gray matter or white matter that are significantly related to the effects under study [13, 90, 91]. The analysis is a standard t test or F test extended to all voxels in 3D space. In SPM, the design matrix for statistical analysis is composed of two contrasts comparing the smoothed gray matter or white matter. These analyses generate statistical maps showing all voxels of the brain at which the null hypothesis is rejected at a given p value. These maps are often displayed as color maps scaled by the t statistic (Figs. 3.19, 3.20 and 3.21). Because the statistical tests are performed across a very large number of voxels, the results must be corrected for multiple comparisons to prevent false-positive results arising by chance. The classical approach to the multiple comparison problem is to control the family-wise error (FWE) rate [90], and the most common way to control the FWE rate is with Bonferroni's correction. A less conservative method is false discovery rate (FDR) correction [104]: FWE correction controls the chance of any false positives across the entire volume, whereas FDR correction controls the expected proportion of false positives among suprathreshold voxels. A small volume correction is often used to reduce the number of comparisons performed and thereby increase the chance of identifying significant results in particular regions of interest. This method typically involves defining regions of interest over particular brain structures and analyzing only those regions. The placement of these regions should be hypothesis driven and ideally based on previous work.
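The contrast between uncorrected, Bonferroni-corrected, and FDR-corrected thresholds can be demonstrated with simulated data. This is a sketch, not SPM's implementation: it runs a mass-univariate two-sample t test over hypothetical voxels for two made-up groups, with a true effect planted in the first 100 voxels; the group sizes, effect size, and voxel count are all assumptions.

```python
import numpy as np
from scipy import stats

# Simulated voxelwise two-sample comparison (e.g., patients vs. controls).
rng = np.random.default_rng(42)
n_voxels = 10_000
patients = rng.normal(0.0, 1.0, size=(20, n_voxels))
controls = rng.normal(0.0, 1.0, size=(20, n_voxels))
patients[:, :100] += 1.5  # plant a true group difference in 100 voxels

# Mass-univariate t test: one test per voxel, as in the statistical map.
t_vals, p_vals = stats.ttest_ind(patients, controls, axis=0)

alpha = 0.05

# Uncorrected: roughly alpha * n_voxels null voxels pass by chance alone.
uncorrected = p_vals < alpha

# Bonferroni (controls the family-wise error rate): divide alpha by the
# number of comparisons, so any false positive anywhere is unlikely.
bonferroni = p_vals < alpha / n_voxels

# Benjamini-Hochberg FDR: controls the expected proportion of false
# positives among suprathreshold voxels; less conservative than FWE.
order = np.argsort(p_vals)
ranked = p_vals[order]
step_up = alpha * np.arange(1, n_voxels + 1) / n_voxels
below = ranked <= step_up
k = np.nonzero(below)[0].max() + 1 if below.any() else 0
fdr = np.zeros(n_voxels, dtype=bool)
fdr[order[:k]] = True

# FWE control is the strictest; FDR sits between it and no correction.
print(uncorrected.sum(), fdr.sum(), bonferroni.sum())
```

A small volume correction amounts to restricting `p_vals` to the voxels inside a predefined region of interest before correcting, so the penalty is paid over far fewer comparisons.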
