Computational Diffusion MRI: MICCAI Workshop, Athens, Greece, October 2016
For the synthetic dataset, Fig. 1 suggests that the performance of both algorithms depends strongly on the SNR, while the number of gradient directions exerts little influence. Moreover, the MAP approach outperforms the DNN in terms of the NMSE for more than 30 gradient directions and a noise-free signal, while its NMSE increases for fewer gradient directions. For noisy signals, the DNN outperforms the MAP approach for every number of gradient directions. This qualifies the DNN for describing a single shell, even in settings with a limited number of gradient directions and high noise.
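The text does not restate how the NMSE is computed. Assuming the usual normalized mean squared error over a shell's signal values (an assumption, not taken from this chapter), it can be sketched as:

```python
import numpy as np

def nmse(s_pred, s_true):
    """Normalized mean squared error: squared prediction error
    normalized by the squared magnitude of the measured signal."""
    s_pred = np.asarray(s_pred, dtype=float)
    s_true = np.asarray(s_true, dtype=float)
    return np.sum((s_pred - s_true) ** 2) / np.sum(s_true ** 2)
```

Under this definition a perfect prediction gives 0, and the reported values (e.g. 2.01%) correspond to `nmse(...) == 0.0201`.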
Considering the real-data NMSEs for identical input and target shells in Tables 2 and 3, the same effect can be seen: both algorithms achieve similarly good results, i.e. the performance is hardly influenced by the number of gradient directions. Comparing the resulting augmented data, the results of the two algorithms diverge as the distance between shells increases. In those cases, the MAP approach yields a much higher NMSE than the DNN, for 90 as well as for 15 gradient directions. Moreover, the MAP approach completely fails to predict the 2nd shell from the 3rd shell; in this case the NMSE is even higher than the NMSE of d without any prediction. Using two shells as input improves the performance of the DNN for every combination, whereas the MAP algorithm only improves if the 1st + 3rd or the 2nd + 3rd shells are used as input in the case of 90 gradient directions. With only 15 gradient directions, however, the MAP approach improves only when predicting the 2nd shell from the 1st + 3rd shells.
A similar behavior can be seen in Fig. 2. The variance is low if the input and target shell are identical and increases as the distance between the two shells grows. As before, the MAP algorithm stabilizes if more shells are used as input, since the variance, median, and quantile NMSE decrease. However, only three different shells are available in the HCP dataset used here.
Overall, it should be noted that only a subset of 15 gradient directions is needed for augmentation, as the NMSE is only slightly higher, which considerably reduces the scan time. For example, from data acquired with 15 gradient directions on the 2nd shell, the 1st shell can be predicted with only 2.01% NMSE, while the required scan time is theoretically reduced to 50%, or to 8.33% relative to the original dataset with 90 gradient directions.
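The two scan-time figures follow from simple bookkeeping, assuming scan time scales linearly with the number of acquired gradient directions (the function below is illustrative, not from the chapter): acquiring one 15-direction shell instead of two is 50% of the time, and instead of two 90-direction shells it is 8.33%.

```python
def scan_time_fraction(acquired_dirs, full_dirs_per_shell, n_shells=2):
    """Fraction of scan time spent when acquiring only `acquired_dirs`
    directions instead of `n_shells` full shells, assuming scan time
    is proportional to the total number of gradient directions."""
    return acquired_dirs / (full_dirs_per_shell * n_shells)

# 15 directions vs. two 15-direction shells -> 0.5 (50%)
# 15 directions vs. two 90-direction shells -> 0.0833... (8.33%)
```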
In terms of prediction speed, the DNN can predict one shell per voxel at approximately 23,000 voxels per second, whereas the MAP algorithm achieves a maximum rate of 150 voxels per second. It should be considered, though, that the MAP algorithm utilizes the CPU, while the DNN is based on a GPU implementation. In any case, augmentation using the DNN requires less than one minute for a whole-brain scan.
A limitation of this work is that the prediction is only evaluated on the scanner type used in the HCP. Augmenting data from a different scanner may require training with a dataset from that specific scanner. Another approach, presented in , is to simulate individual synthetic data for a specific scan and to re-train the network on this synthetic dataset. Whether, and to what extent, scanner dependency is an issue will be investigated in future work.