Computational Diffusion MRI: MICCAI Workshop, Athens, Greece, October 2016

# Problem Formulation

Given a vector-valued image f of arbitrary dimension, with pixels i ∈ {1, …, N} each consisting of a vector f_i ∈ ℝ^M, we are interested in restoring its denoised counterpart u by solving the following problem:
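The displayed objective is missing here; based on the definitions used below (framelet transforms W_{l,r}, weights w_{g,m}, and tuning parameters λ_{g,l,r}), a plausible reconstruction of the group ℓ0 formulation is:

```latex
\min_{u}\;\frac{1}{2}\sum_{i=1}^{N}\left\|u_i - f_i\right\|_2^2
\;+\;\sum_{g=1}^{G}\sum_{l,r}\sum_{i=1}^{N}
\lambda_{g,l,r}\,
\mathbb{1}\!\left[\Big\|\big(w_{g,m}\,(W_{l,r}\,u^{(m)})_i\big)_{m=1}^{M}\Big\|_2 \neq 0\right]
```

This is a sketch, not the authors' exact display: the indicator 𝟙[·] counts nonzero coefficient groups, which is consistent with the group ℓ0 interpretation and the hard-thresholding solution described in the Optimization section.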

The regularization term is in fact a sum of G regularization terms, each of which groups a set of images. The gth grouping (with associated tuning parameters λ_{g,l,r}), where g ∈ {1, 2, …, G}, is defined according to a set of weights {w_{g,m}}, m ∈ {1, …, M}. Channels with w_{g,m} ≠ 0 are included in the grouping, and their weighted framelet coefficients are jointly penalized via the ℓ2-norm. The different groupings can possibly overlap, implying that each image can be considered in different groups at the same time. This is in a similar spirit to the overlapped group LASSO [13]. We set λ_{g,l,r} = λ/2 if r ≠ 0 and λ_{g,l,r} = 0 otherwise, i.e., low-pass framelet coefficients are not penalized. Here λ is a constant that can be set independently of the weights.

# Optimization

Problem (15) can be solved effectively using penalty decomposition (PD) [14]. Defining auxiliary variables (v_{g,m,l,r})_i := w_{g,m}(W_{l,r} u^{(m)})_i, this amounts to minimizing the following objective function with respect to u and v := {v_{g,m,l,r}}:
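The displayed penalty objective is absent here; the standard PD construction with quadratic penalty parameter μ > 0 would read (a reconstruction consistent with the subproblems solved below, not necessarily the authors' exact display):

```latex
\mathcal{L}_{\mu}(u, v) \;=\;
\frac{1}{2}\sum_{i}\left\|u_i - f_i\right\|_2^2
\;+\;\sum_{g,l,r}\sum_{i}\lambda_{g,l,r}\,
\mathbb{1}\!\left[\Big\|\big((v_{g,m,l,r})_i\big)_{m=1}^{M}\Big\|_2 \neq 0\right]
\;+\;\frac{\mu}{2}\sum_{g,m,l,r}\sum_{i}
\Big((v_{g,m,l,r})_i - w_{g,m}(W_{l,r}u^{(m)})_i\Big)^2
```

The last term softly enforces the constraint v_{g,m,l,r} = w_{g,m} W_{l,r} u^{(m)}, which becomes exact as μ → ∞.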

In PD, we (1) alternate between solving for u and v using block coordinate descent (BCD). Once this converges, we (2) increase the penalty parameter μ > 0 by a multiplicative factor greater than 1 and repeat step (1). This is repeated until increasing μ no longer changes the solution [14].
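The PD scheme above can be sketched on a toy scalar instance, min_u ½(u − f)² + λ·𝟙[u ≠ 0], with v as the auxiliary variable. All function names and parameter defaults (μ₀, the factor ρ) are illustrative choices, not the authors':

```python
import numpy as np

def hard_threshold(z, lam, mu):
    # l0 proximal step: keep z only if its magnitude exceeds sqrt(2*lam/mu)
    return z if abs(z) > np.sqrt(2.0 * lam / mu) else 0.0

def penalty_decomposition(f, lam, mu0=1.0, rho=2.0, tol=1e-8, max_outer=50):
    """Toy 1-D penalty decomposition for min_u 0.5*(u - f)**2 + lam*1[u != 0]."""
    u, mu = f, mu0
    prev = None
    for _ in range(max_outer):
        # Step (1): block coordinate descent on (u, v) for fixed mu
        for _ in range(100):
            v = hard_threshold(u, lam, mu)   # v-subproblem: hard-thresholding
            u = (f + mu * v) / (1.0 + mu)    # u-subproblem: simple division
        # Step (2): increase mu until the solution stops changing
        if prev is not None and abs(u - prev) < tol:
            break
        prev, mu = u, mu * rho
    return u
```

Large inputs survive thresholding (a strong signal is kept intact), while small inputs are driven to zero as μ grows, mirroring the group ℓ0 behavior in the image problem.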

First Subproblem

We solve for v in the first subproblem, i.e., min_v L_μ(u, v). This is a group ℓ0 problem, and the solution can be obtained via hard-thresholding:

where
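The displayed expressions are omitted here; for a group ℓ0 penalty coupled by the quadratic term, the standard group hard-thresholding solution would be (a plausible reconstruction, not the authors' exact formula):

```latex
(v_{g,m,l,r})_i \;=\;
\begin{cases}
w_{g,m}\,(W_{l,r}u^{(m)})_i
& \text{if } \Big\|\big(w_{g,m'}(W_{l,r}u^{(m')})_i\big)_{m'=1}^{M}\Big\|_2
  > \sqrt{2\lambda_{g,l,r}/\mu},\\[4pt]
0 & \text{otherwise.}
\end{cases}
```

Each coefficient group is either kept in full or zeroed jointly, which is what makes the channels in a group share a common support.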

An ℓ1 version of the algorithm can be obtained by using soft-thresholding instead.

Second Subproblem

By taking the partial derivative with respect to u^{(m)}, the solution to the second subproblem, i.e., min_u L_μ(u, v), is, for each m,

where we have dropped the subscript i for notational simplicity. Note that since Σ_{l,r} W_{l,r}^T W_{l,r} = I, the problem can be simplified to become

Solving the above equation for u^{(m)} is trivial and involves only simple division.
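Concretely, setting the derivative to zero and applying the tight-frame property Σ_{l,r} W_{l,r}^T W_{l,r} = I gives the closed form (a reconstruction from the stated definitions, in place of the missing display):

```latex
u^{(m)} \;=\;
\frac{f^{(m)} + \mu \sum_{g,l,r} w_{g,m}\, W_{l,r}^{\top} v_{g,m,l,r}}
     {1 + \mu \sum_{g} w_{g,m}^{2}}
```

The denominator is a scalar per channel, so the update is indeed only a simple division.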

# Setting the Weights

In setting the weights {w_{g,m}}, we note that the weights should decay with the dissimilarity between the gradient directions associated with a pair of diffusion-weighted images. To reflect this, we let G = M and, for g, m ∈ {1, …, M}, set w_{g,m} = exp(κ[(v_m^T v_g)² − 1]) if v_m^T v_g ≥ cos(θ), and w_{g,m} = 0 otherwise, where κ > 0 is a parameter that determines the rate of decay of the weight. The exponential function is in fact modified from the probability density function of the Watson distribution [15] with concentration parameter κ. Essentially, this implies that for the gth diffusion-weighted image acquired at gradient direction v_g, there is a corresponding group of images with associated weights {w_{g,m}}. The weight is maximal at w_{g,g} = 1 and is attenuated when m ≠ g. To reduce computation costs, weights of images scanned at gradient directions deviating more than θ from v_g are set to 0, and the respective images are hence discarded from the group. We set θ = 30°.
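The weight construction above is easy to vectorize. A minimal sketch follows; the function name, the default κ = 2.0, and the use of |v_m^T v_g| in the cutoff (to respect the antipodal symmetry of diffusion gradient directions, which the squared dot product already encodes) are illustrative assumptions:

```python
import numpy as np

def watson_weights(V, kappa=2.0, theta_deg=30.0):
    """Compute w[g, m] for unit gradient directions given as rows of V.

    w[g, m] = exp(kappa * ((v_g . v_m)**2 - 1)) when v_g and v_m are within
    theta degrees of each other (up to sign), and 0 otherwise.
    """
    dots = V @ V.T                        # pairwise dot products v_g . v_m
    w = np.exp(kappa * (dots**2 - 1.0))   # Watson-like angular decay, w[g, g] = 1
    w[np.abs(dots) < np.cos(np.radians(theta_deg))] = 0.0  # discard far directions
    return w
```

For example, three orthogonal directions (90° apart, beyond θ = 30°) yield a weight matrix with unit diagonal and zero off-diagonal entries, so each group contains only its own image.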
