The results indicate that fibers can be represented sparsely, with a mean reconstruction error slightly above 1.5 mm when using seven non-zero coefficients per fiber. Furthermore, the size of the dictionary has much less impact on the reconstruction quality than T_{0}. In addition, one common dictionary can be used for a group of fiber-sets, which is more efficient and only slightly increases the reconstruction error. The transition from the initial 60-dimensional representation to the sparse representation with T_{0} = 7 reduces the number of non-zero values by 87% (including the norm value saved for each fiber) and thus represents a significant reduction in memory requirements. If the fiber set is further compressed, for example using Huffman coding, the sparse sets can achieve a much higher compression ratio than the original set, owing to the much lower entropies of the sparse representations.
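As an illustration of sparse coding with a fixed budget of T_{0} non-zero coefficients, the following minimal numpy sketch uses greedy Orthogonal Matching Pursuit. The dictionary here is random and the 60-dimensional vector is synthetic; this is a generic sketch of the mechanism, not the paper's learned dictionary or exact coding algorithm.

```python
import numpy as np

def omp(D, x, t0=7):
    """Greedy Orthogonal Matching Pursuit: approximate x with at most
    t0 atoms (columns of D, assumed unit-norm)."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(t0):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares fit of x on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((60, 300))   # hypothetical 60 x 300 dictionary
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
x = rng.standard_normal(60)          # a 60-dimensional fiber descriptor
alpha = omp(D, x, t0=7)              # sparse code with <= 7 non-zeros
x_hat = D @ alpha                    # reconstruction
```

Only the (index, value) pairs of the at most seven non-zero entries of `alpha` need to be stored per fiber, which is the source of the memory savings discussed above.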

A significant difference between our method and other fiber compression techniques [3] is that it enables the estimation of inter-fiber similarity directly in the compressed space, using our proposed CWDS measure. The calculation of similarities (which can be converted into distances) is a necessary part of many common analysis schemes (comparisons, classifications, etc.) often applied to fiber sets. CWDS allows these tasks to be performed without “decompressing” the data.
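The mechanism that makes similarity computation possible in the compressed space can be sketched as follows: for fibers stored as sparse codes over a shared dictionary D, inner products of the reconstructions satisfy ⟨Dα, Dβ⟩ = αᵀGβ with G = DᵀD precomputed once, and only the entries of G indexed by the two supports are ever touched. The snippet below illustrates this identity with hypothetical codes; it shows the underlying algebra, not the exact CWDS formula of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((60, 300))   # hypothetical shared dictionary
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
G = D.T @ D                          # Gram matrix, precomputed once

def compressed_cosine(alpha, beta, G):
    """Cosine similarity of D@alpha and D@beta, computed from the sparse
    codes alone via <Da, Db> = a^T G b restricted to the supports."""
    sa, sb = np.flatnonzero(alpha), np.flatnonzero(beta)
    dot = alpha[sa] @ G[np.ix_(sa, sb)] @ beta[sb]
    na = np.sqrt(alpha[sa] @ G[np.ix_(sa, sa)] @ alpha[sa])
    nb = np.sqrt(beta[sb] @ G[np.ix_(sb, sb)] @ beta[sb])
    return dot / (na * nb)

# two hypothetical sparse codes (supports and values chosen arbitrarily)
alpha = np.zeros(300); alpha[[3, 40, 77]] = [0.5, -1.2, 0.8]
beta = np.zeros(300); beta[[3, 10, 200]] = [1.0, 0.3, -0.7]
sim = compressed_cosine(alpha, beta, G)
```

Since only supports of size at most T_{0} are involved, each similarity evaluation touches a T_{0} × T_{0} sub-block of G, independent of the dictionary size.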

We have demonstrated the performance of the CWDS measure both for individual dictionaries learned for each fiber-set and for a common dictionary, which allows for inter-set similarity calculations. The value of T_{0} was constrained to 7 in order to achieve the same computational complexity as cosine similarity in the original space. If more accuracy is needed, it can be achieved with a higher T_{0}, at an additional computational cost. Dictionary learning, although computationally intensive in itself, is performed offline and only once for a group of fiber sets. The computational burden is further reduced by learning from a coreset rather than the full fiber set.
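The complexity argument behind the choice T_{0} = 7 can be made concrete with a simple operation count: evaluating αᵀGβ over the non-zero supports costs on the order of T_{0}² multiply-adds, versus d multiply-adds for the numerator of a cosine similarity in the original d-dimensional space. With T_{0} = 7 and d = 60 the two costs are comparable (the counts below are illustrative and ignore the normalization terms):

```python
T0, d = 7, 60
sparse_ops = T0 * T0   # a^T G b over the supports: ~49 multiply-adds
dense_ops = d          # dense dot product in the original space: 60
```

A larger T_{0} would improve reconstruction accuracy but makes the per-pair cost grow quadratically in T_{0}.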

Future work includes incorporating the presented concept into a full compression pipeline, together with the elimination of redundant fibers and additional coding with adaptive bit allocation. We will also endeavor to lower the reconstruction error by incorporating an error constraint into the sparsity framework and by accounting for the different fiber lengths during the dictionary learning process.