7

Efficient recursive estimation of the Riemannian barycenter on the hypersphere and the special orthogonal group with applications

Rudrasis Chakraborty; Baba C. Vemuri    University of Florida, CISE Department, Gainesville, FL, United States

Abstract

Finding the Riemannian barycenter (center of mass) or the Fréchet mean (FM) of manifold-valued data sets is a commonly encountered problem in a variety of fields of science and engineering, including, but not limited to, medical image computing, machine learning, and computer vision. For example, it is encountered in tasks such as atlas construction, clustering, principal geodesic analysis, and so on. Traditionally, algorithms for computing the FM of manifold-valued data require that the entire data pool be available a priori and not incrementally. Thus, when presented with new data, the FM needs to be recomputed over the entire pool, which is inefficient in both computation and storage. A computationally and storage efficient alternative is a recursive algorithm for computing the FM, which simply updates the previously computed FM when presented with a new data sample. In this chapter we present such an alternative, called the inductive/incremental Fréchet mean estimator (iFME), for data residing on two well-known Riemannian manifolds, namely the hypersphere $S(d)$ and the special orthogonal group $SO(d)$. We prove the asymptotic convergence of iFME to the true FM of the underlying distribution from which the data samples were drawn. Further, we present several experiments demonstrating the performance of iFME on synthetic and real data sets.

Keywords

Fréchet Mean; Recursive Estimator; Hypersphere; SO(n); Gnomonic Projection; Weak Consistency

Acknowledgements

We thank Prof. David E. Vaillancourt and Dr. Edward Ofori for providing the diffusion MRI scans of movement disorder patients. We also thank Dr. Xavier Pennec and the anonymous reviewer for their invaluable comments in improving this manuscript. This research was in part supported by the NSF grants IIS-1525431 and IIS-1724174.

7.1 Introduction

Manifold-valued data have gained much importance in recent times due to their expressiveness and the ready availability of machines with powerful CPUs and large storage. For example, these data arise as diffusion tensors (manifold of symmetric positive definite matrices) [1,2], linear subspaces (the Grassmann manifold) [3-8], column orthogonal matrices (the Stiefel manifold) [3,9-11], directional data and probability densities (the hypersphere) [12-15], and others. A useful method of analyzing manifold-valued data is to compute statistics on the underlying manifold. The most popular statistic is a summary of the data, for example, the Riemannian barycenter (Fréchet mean, FM) [16-18], the Fréchet median [19,20], and so on.

FM computation on Riemannian manifolds has been an active area of research for the past few decades. Several researchers have addressed this problem, and we refer the reader to [21,18,22,23,19,24-27]. In most of these works the authors relied on a standard gradient descent-based iterative computation of the FM, which suffers from two major drawbacks in an online computation setting: (1) for each new sample, the FM must be recomputed from scratch, and (2) the entire input data must be stored to estimate the new FM. Instead, an incremental, that is, recursive, technique can address this problem more efficiently with respect to time and space. In this age of massive and continuously streaming data, samples are often acquired incrementally. Hence, also from an applications perspective, the desired algorithm should be recursive/inductive to maximize computational efficiency and to account for the availability of data, requirements that are seldom addressed in more theoretically oriented fields.

Recently, several incremental mean estimators for manifold-valued data have been reported [28-31,5,32]. Sturm [29] presented an incremental mean, the so-called inductive mean, and proved its convergence to the true FM for all nonpositively curved (NPC) spaces. In [33] the authors presented several algorithms (including a recursive algorithm) for FM computation for data residing in CAT(0) spaces, which are NPC, and demonstrated several applications of the same to computer vision and medical imaging. Further, in [31] an incremental FM computation algorithm, along with its convergence analysis and applications, was presented for a population of symmetric positive definite (SPD) matrices. Recently, Lim [30] presented an inductive estimator of the weighted FM of SPD matrices. The convergence analysis in all of these works applies only to samples belonging to NPC spaces, and hence it does not cover manifolds with positive sectional curvature, which are our "objects" of interest in this chapter. Arnaudon et al. [34] presented a stochastic gradient descent algorithm for barycenter computation of probability measures on Riemannian manifolds under some conditions. Their algorithm is quite general, as it is applicable to both nonpositively and positively curved Riemannian manifolds.

In this work we present a novel incremental FM estimator (iFME) for a set of samples on two Riemannian manifolds of positive sectional curvature, namely the hypersphere and the special orthogonal group. Data samples from either of these manifolds are very commonly encountered in medical imaging, computer vision, and computer graphics. To mention a few, the directional data often encountered in image processing and computer vision are points on the unit 2-sphere $S(2)$ [12]. Further, 3×3 rotation matrices can be parameterized by unit quaternions, which can be represented by points on the projective space $P(3)$, i.e., pairs of antipodal points on the three-dimensional unit sphere $S(3)$ [15]. Also, any probability density function, for example, the orientation distribution function (ODF) in diffusion magnetic resonance imaging (MRI) [14], can be represented as a point on the positive quadrant of a unit Hilbert sphere [35,13].

7.2 Riemannian geometry of the hypersphere

The hypersphere is a constant positive curvature Riemannian manifold that is commonly encountered in numerous application problems. Its geometry is well known, and here we will simply present (without derivations) the closed-form expressions for the Riemannian Exponential and Log maps as well as the geodesic between two points on it. Further, we also present the well-known square root parameterization of probability density functions (PDFs), which allows us to identify them with points on the unit Hilbert sphere. This will be needed in representing the probability density functions, namely, the ensemble average propagators (EAPs) derived from diffusion MRI, as points on the unit Hilbert sphere.

Without loss of generality, we restrict the analysis to PDFs defined on the interval $[0,T]$ for simplicity: $\mathcal{P} = \{p : [0,T] \to \mathbb{R} \mid \forall s,\ p(s) \ge 0,\ \int_0^T p(s)\,ds = 1\}$. In [26] the Fisher–Rao metric was introduced to study the Riemannian structure of a statistical manifold (the manifold of probability densities). For a PDF $p \in \mathcal{P}$, the Fisher–Rao metric is defined as $\langle v, w \rangle_p = \int_0^T \frac{v(s)w(s)}{p(s)}\,ds$ for $v, w \in T_p\mathcal{P}$. The Fisher–Rao metric is invariant to reparameterizations of the functions. To facilitate easy computations when using Riemannian operations, the square root density representation $\psi = \sqrt{p}$ was used in [13]; it was originally proposed in [36,37] and further developed from the geometric statistics viewpoint in [38]. The space of square root density functions is defined as $\Psi = \{\psi : [0,T] \to \mathbb{R} \mid \forall s,\ \psi(s) \ge 0,\ \int_0^T \psi^2(s)\,ds = 1\}$. As we can see, Ψ forms a convex subset of the unit sphere in a Hilbert space. Then the Fisher–Rao metric can be written as $\langle v, w \rangle = \int_0^T v(s)w(s)\,ds$ for tangent vectors $v, w \in T_\psi\Psi$. Given any two functions $\psi_i, \psi_j \in \Psi$, the geodesic distance between these two points is given in closed form by $d(\psi_i, \psi_j) = \cos^{-1}(\langle \psi_i, \psi_j \rangle)$. The geodesic at ψ with a direction $v \in T_\psi\Psi$ is defined as $\Gamma(t) = \cos(t)\,\psi + \sin(t)\,\frac{v}{\|v\|}$. The Riemannian exponential map can then be expressed as $\mathrm{Exp}_\psi(v) = \cos(\|v\|)\,\psi + \sin(\|v\|)\,\frac{v}{\|v\|}$, where $\|v\| \in [0, \pi)$. The Riemannian inverse exponential map is then given by $\mathrm{Log}_{\psi_i}(\psi_j) = u\,\cos^{-1}(\langle \psi_i, \psi_j \rangle)/\sqrt{\langle u, u \rangle}$, where $u = \psi_j - \langle \psi_i, \psi_j \rangle\,\psi_i$. Note that, for the rest of this chapter, we assume that the data points lie within a geodesic ball of radius less than the injectivity radius, so that the Riemannian exponential is unique, and hence the corresponding inverse exponential map always exists. We can define the geodesic between $\psi_i$ and $\psi_j$ by $\Gamma_{\psi_i}^{\psi_j}(t) = \mathrm{Exp}_{\psi_i}(t\,v)$, where $v = \mathrm{Log}_{\psi_i}(\psi_j)$ and $t \in [0,1]$. We will use the term geodesic to denote the shortest geodesic between two points.
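As an illustration, the closed-form operations above translate directly into code. The following NumPy sketch (not from the chapter; written for unit vectors in a finite-dimensional ambient space rather than sampled densities, with hypothetical function names) collects the distance, Exp, Log, and geodesic maps:

```python
import numpy as np

def sphere_dist(x, y):
    """Geodesic distance d(x, y) = arccos(<x, y>) between unit vectors."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def sphere_exp(x, v):
    """Riemannian exponential map: Exp_x(v) = cos(|v|) x + sin(|v|) v/|v|."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def sphere_log(x, y):
    """Inverse exponential map: Log_x(y) = u arccos(<x, y>)/|u| with u = y - <x, y> x."""
    u = y - np.dot(x, y) * x
    nu = np.linalg.norm(u)
    if nu < 1e-12:
        return np.zeros_like(x)
    return u * (sphere_dist(x, y) / nu)

def sphere_geodesic(x, y, t):
    """Point at parameter t in [0, 1] on the shortest geodesic from x to y."""
    return sphere_exp(x, t * sphere_log(x, y))
```

For example, `sphere_geodesic(x, y, 0.5)` returns the intrinsic midpoint of x and y, which stays on the unit sphere.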

Using the geodesic distance provided previously, we can define the Fréchet mean (FM) [16,17] of a set of points on the hypersphere as the minimizer of the sum of squared geodesic distances (the so-called Fréchet functional). Let $B(x, \rho)$ be the geodesic ball centered at $x$ with radius ρ, that is, $B(x, \rho) = \{y \in S(k) \mid d(x,y) < \rho\}$. The authors of [39,18] showed that for any $x \in S(k)$ and for data samples in $B(x, \frac{\pi}{2})$, the minimizer of the Fréchet functional exists and is unique. For the rest of this chapter we will assume that this condition is satisfied for any given set of points $\{x_i\} \subset S(k)$. For more details on the Riemannian geometry of the sphere, we refer the reader to Chapter 2 of [40] and references therein.

7.3 Weak consistency of iFME on the sphere

In this section we present a detailed proof of convergence of our recursive estimator on $S(k)$. The proposed method is similar in "spirit" to the incremental arithmetic mean update in the Euclidean space; given the old mean $m_{n-1}$ and the new sample point $x_n$, we define the new mean $m_n$ as the weighted mean of $m_{n-1}$ and $x_n$ with weights $\frac{n-1}{n}$ and $\frac{1}{n}$, respectively. From a geometric viewpoint, this corresponds to choosing the point on the geodesic between $m_{n-1}$ and $x_n$ with parameter $t = \frac{1}{n}$, that is, $m_n = \Gamma_{m_{n-1}}^{x_n}(1/n)$.

Formally, let $x_1, x_2, \ldots, x_N$ be a set of $N$ samples on the hypersphere $S(k)$, all of which lie inside a geodesic ball of radius $\frac{\pi}{2}$. The incremental FM estimator (denoted by iFME) $m_n$ given the $n$th sample point $x_n$ is defined by

$$m_1 = x_1, \tag{7.1}$$

$$m_n = \Gamma_{m_{n-1}}^{x_n}\left(\frac{1}{n}\right), \tag{7.2}$$

where $\Gamma_{m_{n-1}}^{x_n}$ is the shortest geodesic path from $m_{n-1}$ to $x_n$ (in $S(k)$), and $\frac{1}{n}$ is the weight assigned to the new sample point (in this case the $n$th sample), which is henceforth called the Euclidean weight. In the rest of this section we will show that if the number of given samples $N$ tends to infinity, then the iFME estimates will converge to the expectation of the distribution from which the samples are drawn. Note that the geodesic construction steps in the proof given further are not needed to compute the iFME; these steps are only needed to prove the weak consistency of iFME.
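A minimal sketch of the recursion (7.1)-(7.2) in NumPy (not from the chapter; the geodesic step uses the standard slerp form of the point at parameter t on the shortest geodesic):

```python
import numpy as np

def geodesic_step(m, x, t):
    """Point at parameter t on the shortest geodesic from m to x on the unit sphere,
    in slerp form: (sin((1-t) theta) m + sin(t theta) x) / sin(theta), theta = d(m, x)."""
    theta = np.arccos(np.clip(np.dot(m, x), -1.0, 1.0))
    if theta < 1e-12:
        return m
    return (np.sin((1.0 - t) * theta) * m + np.sin(t * theta) * x) / np.sin(theta)

def ifme(samples):
    """iFME on the sphere: m_1 = x_1, then m_n = Gamma_{m_{n-1}}^{x_n}(1/n)."""
    m = samples[0]
    for n, x in enumerate(samples[1:], start=2):
        m = geodesic_step(m, x, 1.0 / n)
    return m
```

Each update touches only the previous estimate and the new sample, so the cost per sample is constant and no past samples need to be stored.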

Our proof is based on the idea of projecting the samples $x_i$ on the sphere to the tangent plane using the gnomonic projection [15] and performing the convergence analysis on the projected samples $\tilde{x}_i$ in this linear space, instead of performing the analysis on the hypersphere. We take advantage of the fact that, via the gnomonic projection, the geodesic curve between any pair of points on the hemisphere is mapped to a straight line in the tangent space at the anchor point (in this case, without loss of generality, assumed to be the north pole). A figure depicting the gnomonic projection is shown in Fig. 7.1. Despite the simplifications that the gnomonic projection brings to the statistical analysis of the iFME estimates on the hypersphere, there is one important obstacle that must be considered. Without loss of generality, suppose the true FM of the input samples $\{x_i\}$ is the north pole. Then it can be shown through counterexamples that:

  •  The use of Euclidean weights $\frac{1}{n}$ to update the iFME estimates on $S(k)$ does not necessarily correspond to the same weighting scheme between the old arithmetic mean and the new sample in the projection space, that is, the tangent space.

Figure 7.1 Gnomonic projection.

This fact can be illustrated using two sample points on the unit circle $S(1)$, $x_1 = (\cos(\pi/6), \sin(\pi/6))^t$ and $x_2 = (\cos(\pi/3), \sin(\pi/3))^t$, whose intrinsic mean is $m = (\cos(\pi/4), \sin(\pi/4))^t$. Then the midpoint of the gnomonic projections of $x_1$ and $x_2$, which are denoted by $\tilde{x}_1$ and $\tilde{x}_2$, is $\hat{m} = \frac{\tan(\pi/3) + \tan(\pi/6)}{2} = 1.1547 \ne \tan(\pi/4) = \tilde{m}$ (see Fig. 7.2).

Figure 7.2 Illustration of the counterexample.

For the rest of this section, without loss of generality, we assume that the FM of N given samples is located at the north pole. Since the gnomonic projection space is anchored at the north pole, this assumption leads to significant simplifications in our convergence analysis. However, a similar convergence proof can be developed for any arbitrary location of the FM, with the tangent (projection) space anchored at the location of this mean.

In what follows we prove that the use of Euclidean weights $w_n = \frac{1}{n}$ to update the incremental FM on the hypersphere corresponds to a set of weights in the projection space, denoted henceforth by $t_n$, for which the weighted incremental mean in the tangent plane converges to the true FM on the hypersphere, which in this case is the point of tangency.

Theorem 7.1

Angle Bisector Theorem

[41] Let $m_n$ and $m_{n+1}$ denote the iFME estimates for $n$ and $n+1$ given samples, respectively, and let $x_{n+1}$ denote the $(n+1)$st sample. Further, let $\tilde{m}_n$, $\tilde{m}_{n+1}$, and $\tilde{x}_{n+1}$ be the corresponding points in the projection space. Then

$$t_n = \frac{\|\tilde{m}_n - \tilde{m}_{n+1}\|}{\|\tilde{x}_{n+1} - \tilde{m}_{n+1}\|} = \frac{\|\tilde{m}_n\|}{\|\tilde{x}_{n+1}\|} \times \frac{\sin(d(m_n, m_{n+1}))}{\sin(d(m_{n+1}, x_{n+1}))}, \tag{7.3}$$

where d is the geodesic distance on the hypersphere.

Assumption

For the rest of this section, we will assume that the input samples $\{x_i\}$ are within the geodesic ball $B(x, \rho)$, where $0 < \rho < \pi/2$, for some $x \in S(k)$. This is needed for the uniqueness of the FM on the hypersphere (see [42]). □

Then we bound $t_n$ with respect to the radius ρ.

Lemma 7.1

Lower and Upper Bounds for $t_n$

With the same assumptions made as in Theorem 7.1, we have the following inequality:

$$\frac{\cos(\rho)}{n} \le t_n \le \frac{1}{n\cos^3(\rho)}. \tag{7.4}$$

Proof

Lower Bound. To prove the lower bound for $t_n$, we find lower bounds for each fraction on the right-hand side of Eq. (7.3). The first fraction reaches its minimum value if $m_n$ is located at the north pole and $x_{n+1}$ is located on the boundary of the geodesic ball $B(x, \rho)$. In this case, $\|\tilde{m}_n\| = 1$ and $\|\tilde{x}_{n+1}\| = 1/\cos(\rho)$. This implies that

$$\frac{\|\tilde{m}_n\|}{\|\tilde{x}_{n+1}\|} \ge \cos(\rho). \tag{7.5}$$

Next, note that, based on the definition of iFME, $d(m_{n+1}, x_{n+1}) = n\,d(m_n, m_{n+1})$, so the second fraction in (7.3) can be rewritten as $\frac{\sin(d(m_n, m_{n+1}))}{\sin(d(m_{n+1}, x_{n+1}))} = \frac{\sin(d(m_n, m_{n+1}))}{\sin(n\,d(m_n, m_{n+1}))} = \frac{1}{U_{n-1}(\cos(d(m_n, m_{n+1})))}$, where $U_{n-1}(x)$ is the Chebyshev polynomial of the second kind [43]. For any $x \in [-1, 1]$, the maximum of $U_{n-1}(x)$ is reached at $x = 1$, for which $U_{n-1}(1) = n$. Therefore $U_{n-1}(x) \le n$ and $\frac{1}{U_{n-1}(x)} \ge \frac{1}{n}$. This implies that

$$\frac{\sin(d(m_n, m_{n+1}))}{\sin(n\,d(m_n, m_{n+1}))} = \frac{1}{U_{n-1}(\cos(d(m_n, m_{n+1})))} \ge \frac{1}{n}. \tag{7.6}$$

Inequalities (7.5) and (7.6) complete the proof. □

Note that as ρ tends to zero, $\cos(\rho)$ converges to one, and this lower bound tends to $\frac{1}{n}$, which is the weight used in the Euclidean case.
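The Chebyshev identity invoked in the proof, $\sin(n\theta)/\sin(\theta) = U_{n-1}(\cos\theta)$, together with $U_{n-1}(1) = n$, can be checked numerically. The sketch below (not from the chapter) evaluates $U_{n-1}$ via the standard three-term recurrence:

```python
import numpy as np

def chebyshev_U(n, x):
    """Chebyshev polynomial of the second kind, via the recurrence
    U_0(x) = 1, U_1(x) = 2x, U_k(x) = 2x U_{k-1}(x) - U_{k-2}(x)."""
    if n == 0:
        return 1.0
    u_prev, u = 1.0, 2.0 * x
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

theta, n = 0.4, 7
lhs = np.sin(n * theta) / np.sin(theta)   # sin(n*theta)/sin(theta)
rhs = chebyshev_U(n - 1, np.cos(theta))   # U_{n-1}(cos(theta))
print(abs(lhs - rhs) < 1e-12)             # True: the identity holds
print(chebyshev_U(n - 1, 1.0))            # 7.0: U_{n-1}(1) = n
```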

Proof

Upper Bound. First, the upper bound for the first fraction in (7.3) is reached when $m_n$ is on the boundary of the geodesic ball and $x_{n+1}$ is at the north pole. Therefore

$$\frac{\|\tilde{m}_n\|}{\|\tilde{x}_{n+1}\|} \le \frac{1}{\cos(\rho)}. \tag{7.7}$$

Finding the upper bound for the sine term is, however, quite involved. Note that the maximum of the angle α between $m_n$ and $x_{n+1}$ at the center of $S(k)$ is reached when $m_n$ and $x_{n+1}$ are both on the boundary of the geodesic ball, that is, $\alpha \le 2\rho$. Therefore $\rho \in [0, \frac{\pi}{2})$ implies that $\alpha \in [0, \pi)$. Further, we show in the next lemma that the following inequality holds for any $\alpha \in (0, \pi)$:

$$\frac{\sin\left(\frac{n\alpha}{n+1}\right)}{\sin\left(\frac{\alpha}{n+1}\right)} \ge n\cos^2\left(\frac{\alpha}{2}\right) \ge n\cos^2(\rho). \tag{7.8}$$

From (7.7) and (7.8) the result follows. □

Lemma 7.2

$$\frac{\sin\left(\frac{n\alpha}{n+1}\right)}{\sin\left(\frac{\alpha}{n+1}\right)} \ge n\cos^2\left(\frac{\alpha}{2}\right) \tag{7.9}$$

for any $\alpha \in (0, \pi)$.

Proof

Let $f(\theta) = \sin(n\theta) - n\cos^2\left(\frac{n+1}{2}\theta\right)\sin(\theta)$ with $\theta \in (0, \alpha/(n+1))$, $\alpha \in (0, \pi)$, and $n \ge 1$, and let $f_\theta = n\cos(n\theta) + n(n+1)\cos\left(\frac{n+1}{2}\theta\right)\sin\left(\frac{n+1}{2}\theta\right)\sin(\theta) - n\cos^2\left(\frac{n+1}{2}\theta\right)\cos(\theta)$. For $\theta \in (0, \pi/(n+1))$, solving $f_\theta = 0$ gives $\theta = 0$. But $f_{\theta\theta}|_{\theta=0} = 0$, hence we check $f_{\theta\theta\theta}$: $f_{\theta\theta\theta}|_{\theta=0} = -n^3 + 1.5\,n(n+1)^2 + n > 0$ for $n \ge 1$. So $f$ has a minimum at $\theta = 0$, where $f|_{\theta=0} = 0$; thus $f \ge 0$ on $(0, \alpha/(n+1))$ for $n \ge 1$. Since $\sin(\theta) > 0$ for $\theta \in (0, \alpha/(n+1))$, we also have $\frac{f}{\sin(\theta)} = \frac{\sin(n\theta)}{\sin(\theta)} - n\cos^2\left(\frac{n+1}{2}\theta\right) \ge 0$, that is, $\frac{\sin(n\theta)}{\sin(\theta)} \ge n\cos^2\left(\frac{n+1}{2}\theta\right)$. Then, substituting $\theta = \alpha/(n+1)$, we get $\frac{\sin\left(\frac{n\alpha}{n+1}\right)}{\sin\left(\frac{\alpha}{n+1}\right)} \ge n\cos^2\left(\frac{\alpha}{2}\right)$. This completes the proof. □

Thus far we have derived analytical bounds for the sequence of weights $t_n$ in the projection space corresponding to the Euclidean weights on the sphere (Eq. (7.4)). We now prove the convergence of the iFME estimates to the expectation of the distribution from which the samples are drawn as the number of samples tends to infinity.

Theorem 7.2

Unbiasedness

Let $(\sigma, \omega)$ denote a probability space with probability measure ω. A vector-valued random variable $\tilde{x}$ is a measurable function on σ taking values in $\mathbb{R}^k$, that is, $\tilde{x}: \sigma \to \mathbb{R}^k$. The distribution of $\tilde{x}$ is the push-forward probability measure $dP(\tilde{x}) = \tilde{x}_*(\omega)$ on $\mathbb{R}^k$. The expectation is defined by $E[\tilde{x}] = \int_\sigma \tilde{x}\,d\omega$. Let $\tilde{x}_1, \tilde{x}_2, \ldots$ be i.i.d. samples drawn from the distribution of $\tilde{x}$. Also, let $\tilde{m}_n$ be the incremental mean estimate corresponding to the $n$th given sample $\tilde{x}_n$, which is defined by (i) $\tilde{m}_1 = \tilde{x}_1$, (ii) $\tilde{m}_n = t_n\tilde{x}_n + (1 - t_n)\tilde{m}_{n-1}$. Then $\tilde{m}_n$ is an unbiased estimator of the expectation $E[\tilde{x}]$.

Proof

For $n = 2$, $\tilde{m}_2 = t_2\tilde{x}_2 + (1 - t_2)\tilde{x}_1$, and hence $E[\tilde{m}_2] = t_2E[\tilde{x}] + (1 - t_2)E[\tilde{x}] = E[\tilde{x}]$. By the induction hypothesis, we have $E[\tilde{m}_{n-1}] = E[\tilde{x}]$. Then $E[\tilde{m}_n] = t_nE[\tilde{x}] + (1 - t_n)E[\tilde{x}] = E[\tilde{x}]$, hence the result. □

Theorem 7.3

Weak Consistency

Let $\mathrm{Var}[\tilde{m}_n]$ denote the variance of the $n$th incremental mean estimate (defined in Theorem 7.2) with $\frac{\cos(\rho)}{n} \le t_n \le \frac{1}{n\cos^3(\rho)}$ for $\rho \in [0, \pi/2)$. Then there exists $p \in (0, 1]$ such that $\mathrm{Var}[\tilde{m}_n] \le \mathrm{Var}[\tilde{x}]\left(n^p\cos^6(\rho)\right)^{-1}$.

First note that $\mathrm{Var}[\tilde{m}_n] = t_n^2\,\mathrm{Var}[\tilde{x}] + (1 - t_n)^2\,\mathrm{Var}[\tilde{m}_{n-1}]$. Since $0 \le t_n \le 1$, we can see that $\mathrm{Var}[\tilde{m}_n] \le \mathrm{Var}[\tilde{x}]$ for all $n$. Moreover, for each $n$, the right-hand side is convex in $t_n$, so its maximum is achieved when $t_n$ attains either its minimum or its maximum value. Therefore we need to prove the theorem for the following two values of $t_n$: (i) $t_n = \frac{\cos(\rho)}{n}$ and (ii) $t_n = \frac{1}{n\cos^3(\rho)}$. These two cases are proved in Lemmas 7.4 and 7.3, respectively.

Lemma 7.3

Suppose the same assumptions as in Theorem 7.2 are made. Further, let $t_n = \frac{1}{n\cos^3(\rho)}$ for all $n$ and all $\rho \in [0, \pi/2)$. Then $\mathrm{Var}[\tilde{m}_n] \le \mathrm{Var}[\tilde{x}]\left(n\cos^6(\rho)\right)^{-1}$.

Proof

For $n = 1$, $\mathrm{Var}[\tilde{m}_1] = \mathrm{Var}[\tilde{x}]$, which yields the result, since $\cos(\rho) \le 1$. Now assume by induction that $\mathrm{Var}[\tilde{m}_{n-1}] \le \mathrm{Var}[\tilde{x}]\left((n-1)\cos^6(\rho)\right)^{-1}$. Then

$$\begin{aligned}
\frac{\mathrm{Var}[\tilde{m}_n]}{\mathrm{Var}[\tilde{x}]} &= t_n^2 + (1 - t_n)^2\,\frac{\mathrm{Var}[\tilde{m}_{n-1}]}{\mathrm{Var}[\tilde{x}]} \le t_n^2 + (1 - t_n)^2\,\frac{1}{(n-1)\cos^6(\rho)} \\
&= \frac{1}{n^2\cos^6(\rho)} + \left(1 - \frac{1}{n\cos^3(\rho)}\right)^2\frac{1}{(n-1)\cos^6(\rho)} \\
&\le \frac{1}{n^2\cos^6(\rho)} + \left(1 - \frac{1}{n}\right)^2\frac{1}{(n-1)\cos^6(\rho)} \\
&= \frac{1}{n^2\cos^6(\rho)} + \frac{n-1}{n^2\cos^6(\rho)} = \frac{1}{n\cos^6(\rho)}.
\end{aligned}$$

 □

Lemma 7.4

Suppose the same assumptions as in Theorem 7.2 are made. Further, let $t_n = \frac{\cos(\rho)}{n}$ for all $n$ and all $\rho \in [0, \pi/2)$. Then $\mathrm{Var}[\tilde{m}_n] \le \mathrm{Var}[\tilde{x}]\,n^{-p}$ for some $0 < p \le 1$.

Proof

For $n = 1$, $\mathrm{Var}[\tilde{m}_1] = \mathrm{Var}[\tilde{x}]$, which yields the result, since $\cos(\rho) \le 1$. Now assume by induction that $\mathrm{Var}[\tilde{m}_{n-1}] \le \mathrm{Var}[\tilde{x}]\,(n-1)^{-p}$. Then

$$\begin{aligned}
\frac{\mathrm{Var}[\tilde{m}_n]}{\mathrm{Var}[\tilde{x}]} &= t_n^2 + (1 - t_n)^2\,\frac{\mathrm{Var}[\tilde{m}_{n-1}]}{\mathrm{Var}[\tilde{x}]} \le t_n^2 + (1 - t_n)^2\,\frac{1}{(n-1)^p} \\
&= \frac{\cos^2(\rho)}{n^2} + \frac{(n - \cos(\rho))^2}{n^2}\,\frac{1}{(n-1)^p} \\
&= \frac{(n-1)^p\cos^2(\rho) + \cos^2(\rho) - 2n\cos(\rho) + n^2}{n^2(n-1)^p}.
\end{aligned}$$

Now it suffices to show that the numerator of this expression is not greater than $n^{2-p}(n-1)^p$. In other words,

$$(n-1)^p\cos^2(\rho) + \cos^2(\rho) - 2n\cos(\rho) + n^2 - n^{2-p}(n-1)^p \le 0. \tag{7.10}$$

This quadratic function in $\cos(\rho)$ is nonpositive when

$$\frac{n\left(1 - (n-1)^{p/2}\sqrt{\left(\frac{n-1}{n}\right)^p + \frac{1}{n^p} - 1}\right)}{1 + (n-1)^p} \le \cos(\rho) \le \frac{n\left(1 + (n-1)^{p/2}\sqrt{\left(\frac{n-1}{n}\right)^p + \frac{1}{n^p} - 1}\right)}{1 + (n-1)^p}. \tag{7.11}$$

The inequality on the right is satisfied for all values of the cos function. Moreover, it is easy to see that the expression on the left-hand side is increasing with respect to $n > 1$ and hence attains its minimum at $n = 2$. This implies that

$$1 - \sqrt{2^{1-p} - 1} \le \cos(\rho) \iff \rho \le \cos^{-1}\left(1 - \sqrt{2^{1-p} - 1}\right) \iff 0 < p \le 1 - \log_2\left[(1 - \cos(\rho))^2 + 1\right]. \tag{7.12}$$

Note that $p > 0$ for all $\rho < \pi/2$. □

Proof

Convergence. Equipped with the previous two results, it is easy to see that for all $\rho \in [0, \pi/2)$, there exists $p$ satisfying $0 < p \le 1$ such that:

  1. If $t_n = \frac{\cos(\rho)}{n}$, then $\frac{\mathrm{Var}[\tilde{m}_n]}{\mathrm{Var}[\tilde{x}]} \le \frac{1}{n^p} \le \frac{1}{n^p\cos^6(\rho)}$, because $\cos(\rho) \le 1$.
  2. If $t_n = \frac{1}{n\cos^3(\rho)}$, then $\frac{\mathrm{Var}[\tilde{m}_n]}{\mathrm{Var}[\tilde{x}]} \le \frac{1}{n\cos^6(\rho)} \le \frac{1}{n^p\cos^6(\rho)}$, because $p \le 1$.

These two pieces together complete the proof of convergence. □

The inequality in Theorem 7.3 implies that as $n \to \infty$, for any $\rho \in [0, \pi/2)$, the variance of the iFME estimates in the projection space tends to zero. Moreover, as ρ approaches $\pi/2$, the corresponding power of $n$ and $\cos(\rho)$ become very small, and hence the rate of convergence slows down. Note that instead of the weighting scheme used here (i.e., in the spirit of the incremental mean in Euclidean space), we could choose a different weighting scheme that is intrinsic to the manifold (i.e., a function of curvature) to speed up the convergence rate.

Now we will show that our proposed recursive FM estimator has a linear convergence rate.

Theorem 7.4

Convergence rate

Let $x_1, \ldots, x_N$ be samples on $S(k)$ drawn from any distribution. Then algorithm (7.2) has a linear convergence rate.

Proof

Let $x_1, \ldots, x_N$ be the samples drawn from a distribution on $S(k)$, and let $m$ be the FM of $\{x_i\}$. Then, using the triangle inequality, we have

$$d(m_k, m) \le d(m_{k-1}, m_k) + d(m_{k-1}, m) = \frac{1}{k}\,d(m_{k-1}, x_k) + d(m_{k-1}, m) \le \frac{1}{k}\left(d(m_{k-1}, m) + d(x_k, m)\right) + d(m_{k-1}, m).$$

Hence

$$d(m_k, m) \le d(m_{k-1}, m)\left(1 + \frac{1}{k} + \frac{d(x_k, m)}{k\,d(m_{k-1}, m)}\right).$$

Now, since $d(m_{k-1}, m)$ and $d(x_k, m)$ are finite, as the $\{x_i\}$ lie within a geodesic ball of radius $< \pi/2$, we get $\frac{d(m_k, m)}{d(m_{k-1}, m)} \to 1$ as $k \to \infty$. But equality holds only if $m$ lies on the geodesic between $m_{k-1}$ and $x_k$. Let $l < \infty$, and suppose that $m$ lies on the geodesic between $m_{k-1}$ and $x_k$ for some $k > l$. As $m$ is fixed, using induction, we can easily show that $m$ cannot lie on the same geodesic for all $k > l$. Hence $\frac{d(m_k, m)}{d(m_{k-1}, m)} < 1$, and the convergence rate is linear. □

7.4 Experimental results

We now evaluate the effectiveness of the iFME algorithm, compared to its nonincremental counterpart FME, for computing the FM of a finite set of samples on the sphere (the northern hemisphere, not including the equator). For FME, we used a gradient descent technique to minimize the sum of squared geodesic distances cost function. We report the results for samples drawn from a log-normal distribution (with mean at the north pole and variance set to 0.2) on the upper hemisphere. A set of random samples is drawn from the distribution and incrementally input to both the iFME and FME algorithms. The computation time needed for each method to compute the sample FM and the error were recorded as each new sample was incrementally introduced. In all the experiments the computation time is reported for execution on a 3.3 GHz desktop with a quad-core Intel i7 processor and 24 GB RAM. The error is defined as the geodesic distance between the estimated mean (using either iFME or FME) and the expectation of the input distribution. Because of the randomness in generating the samples, we repeated this experiment 100 times for each case, and the mean execution time and error for each method are reported.

The performances of iFME and FME with respect to execution time and error are illustrated in Fig. 7.3. From these plots we can clearly see that iFME performs nearly as well in terms of error/accuracy while requiring considerably less execution time.
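The flavor of this comparison can be reproduced in a few lines. The sketch below (hypothetical synthetic data, not the chapter's actual experiment) computes the FM of points clustered near the north pole both by gradient descent on the Fréchet functional and by the incremental estimator, and measures the geodesic distance between the two estimates:

```python
import numpy as np

def exp_map(x, v):
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def log_map(x, y):
    u = y - np.dot(x, y) * x
    nu = np.linalg.norm(u)
    if nu < 1e-12:
        return np.zeros_like(x)
    return u * (np.arccos(np.clip(np.dot(x, y), -1.0, 1.0)) / nu)

def fme_gradient_descent(samples, step=0.5, iters=100):
    """Nonincremental FM: gradient descent on the sum of squared geodesic distances."""
    m = samples[0]
    for _ in range(iters):
        grad = sum(log_map(m, x) for x in samples) / len(samples)
        m = exp_map(m, step * grad)
    return m

def ifme(samples):
    """Incremental estimator: a single geodesic step of parameter 1/n per new sample."""
    m = samples[0]
    for n, x in enumerate(samples[1:], start=2):
        m = exp_map(m, log_map(m, x) / n)
    return m

rng = np.random.default_rng(0)
# synthetic points within a geodesic ball of radius 0.5 around the north pole of S(2)
pts = [np.array([np.sin(r) * np.cos(t), np.sin(r) * np.sin(t), np.cos(r)])
       for r, t in zip(rng.uniform(0.0, 0.5, 200), rng.uniform(0.0, 2 * np.pi, 200))]
gap = np.arccos(np.clip(np.dot(fme_gradient_descent(pts), ifme(pts)), -1.0, 1.0))
print(gap)  # small geodesic gap: the two estimates nearly coincide
```

The batch method revisits all 200 samples on every iteration, whereas the incremental estimator performs a single constant-cost update per sample.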

Figure 7.3 Time and error comparisons of iFME and FME.

7.5 Application to the classification of movement disorders

In this section we use an exact-PGA (exact principal geodesic analysis) algorithm presented in [44], which is applicable to data residing on a hypersphere. We will call this the ePGA algorithm. We will also use the tangent PCA (tPCA) algorithm presented in [45] for diffusion tensor fields as a comparison. This algorithm consists of (1) computing the FM of the input data, (2) projecting each data point to the tangent space at the FM using the Riemannian log-map, (3) performing standard PCA in the tangent plane, and (4) projecting the result (principal vectors) back to the manifold using the Riemannian exponential map.

In the ePGA algorithm we will use iFME to compute the FM and make use of the parallel transport operation on the hypersphere. For completeness, we now present the ePGA algorithm from [44] and used here.

Algorithm 7.1 The ePGA algorithm on manifold M.

The projection step in the above algorithm has an analytic expression for manifolds with constant sectional curvature, that is, for the hypersphere and the hyperbolic space. Here we present this projection algorithm.

Algorithm 7.2 Algorithm for projecting the data points to a codimension one submanifold.

Note that the parallel transport operation on the hypersphere can be expressed in analytic form. The formula for parallel transporting $u \in T_nS(k)$ from $n$ to $m$ is given by

$$w = \Gamma_n^m(u) = \left(u - v\,\frac{v^t u}{\|v\|^2}\right) + \frac{v^t u}{\|v\|^2}\left(-n\,\|v\|\sin(\|v\|) + v\cos(\|v\|)\right),$$

where $v = \mathrm{Log}_n(m)$. We refer the reader to [44] for details of the ePGA algorithm.
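A sketch of this closed form in NumPy (not from the chapter; `n` and `m` are unit vectors, and `u` is a tangent vector at `n`):

```python
import numpy as np

def parallel_transport(n, m, u):
    """Parallel transport of u in T_n S(k) along the shortest geodesic from n to m:
    the component of u along v = Log_n(m) is rotated in the (n, v/|v|) plane,
    while the orthogonal component is left unchanged."""
    w = m - np.dot(n, m) * n                       # unnormalized Log direction
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return u                                   # n == m: nothing to transport
    theta = np.arccos(np.clip(np.dot(n, m), -1.0, 1.0))
    v = w * (theta / nw)                           # v = Log_n(m), with |v| = theta
    coef = np.dot(v, u) / theta**2                 # v^t u / |v|^2
    return (u - coef * v) + coef * (-n * theta * np.sin(theta) + v * np.cos(theta))
```

Transporting the geodesic's own initial direction yields the geodesic's tangent at `m`, while vectors orthogonal to the geodesic are unchanged, as expected.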

The dataset for classification consists of high angular resolution diffusion image (HARDI) scans from (1) healthy controls, (2) patients with Parkinson's disease (PD), and (3) patients with essential tremor (ET). We aim to automatically discriminate between these three classes using features derived from the HARDI data. This dataset consists of 25 control, 24 PD, and 15 ET images. The HARDI data were acquired using a 3T Philips MR scanner with the following parameters: TR = 7748 ms, TE = 86 ms, b-values of 0 and 1000 s/mm², 64 gradient directions, and voxel size = 2×2×2 mm³.

The authors of [46] employed DTI-based analysis using scalar-valued features to address the problem of movement disorder classification. Later, in [47], a PGA-based classification algorithm was proposed using Cauchy deformation tensors (computed from a nonrigid registration of patient scans to a HARDI control atlas), which are SPD matrices. In the next subsections we develop classification methods based on (1) ensemble average propagators (EAPs) derived from the HARDI data within an ROI and (2) shapes of the ROI derived from the input population. Using a square root density parameterization [48], both features can be mapped to points on a unit Hilbert sphere, where the proposed iFME in conjunction with the ePGA method is applicable.

Classification results using the ensemble average propagator as features: To capture the full diffusional information, we chose to use the ensemble average propagator (EAP) at each voxel as our feature in the classification. We compute the EAPs using the method described in [49] and use the square root density parameterization of each EAP. This way the full diffusion information at each voxel is represented as a point on the unit Hilbert sphere.

We now present the classification algorithm, which is a combination of an ePGA-based reduced representation and a nearest-neighbor classifier. The input to the ePGA algorithm consists of EAP features in this case. The input HARDI data are first rigidly aligned to the atlas computed from the control (normal) group; then EAPs are computed in a 3-D region of interest (ROI) in the midbrain. Finally, the EAP field extracted from each ROI image is identified with a point on the product manifold of unit Hilbert spheres (the number of factors in the product is equal to the number of voxels in the ROI). This is similar in spirit to the product manifold formalism in [45,47].

A set of 10 control, 10 PD, and 5 ET images is randomly picked as the test set, and the rest of the images are used for training. Classification is performed using ePGA, tPCA, and standard PCA and is repeated 300 times to report the average accuracy. The results using EAP features are summarized in Table 7.2. It is evident that the accuracy of ePGA is better than that of tPCA, whereas both methods are considerably more accurate than standard PCA, as they account for the nonlinear geometry of the sphere.

Classification results using the shape features: In this section we evaluate the ePGA algorithm for the task of movement disorder classification based on the shape of the substantia nigra region in the given brain images. We first collected random samples (points) on the boundary of each 3-D shape and applied the Schrödinger distance transform (SDT) technique in [50] to represent each shape as a point on the unit hypersphere. The size of the ROI for the 3-D shape of interest was set to 28×28×15, and the resulting samples lie on an $S(11759)$ manifold. Then we used ePGA for classification. The results reported in Table 7.1 depict the accuracy gained in classification when using ePGA compared to tPCA. Evidently, ePGA yields a better classification accuracy.

Table 7.1

Classification results from ePGA, tPCA, and PCA.

Results using shape features

             |   Control vs. PD   |   Control vs. ET   |      PD vs. ET
             | ePGA  tPCA   PCA   | ePGA  tPCA   PCA   | ePGA  tPCA   PCA
Accuracy     | 94.5  93.0  67.3   | 91.4  90.1  75.7   | 88.1  87.6  64.6
Sensitivity  | 92.3  91.0  52.0   | 87.5  86.2  80.1   | 84.7  82.4  58.4
Specificity  | 96.8  95.0  82.7   | 96.3  94.1  71.3   | 94.2  92.8  70.8

Table 7.2

Classification results from ePGA, tPCA, and PCA.

Results using EAP features

             |   Control vs. PD   |   Control vs. ET   |      PD vs. ET
             | ePGA  tPCA   PCA   | ePGA  tPCA   PCA   | ePGA  tPCA   PCA
Accuracy     | 93.7  93.5  59.8   | 92.8  91.3  70.2   | 92.4  90.9  66.0
Sensitivity  | 93.1  91.8  48.3   | 90.7  89.7  79.8   | 86.2  84.7  56.3
Specificity  | 96.9  95.2  71.3   | 93.1  92.9  60.6   | 98.7  97.1  75.7

7.6 Riemannian geometry of the special orthogonal group

The set of all $n \times n$ orthogonal matrices is denoted by $O(n)$, that is, $O(n) = \{X \in \mathbb{R}^{n\times n} \mid X^TX = I_n\}$. The set of orthogonal matrices with determinant 1, denoted by $SO(n)$ and called the special orthogonal group, forms a compact subset of $O(n)$. As $SO(n)$ is a compact Riemannian manifold, by the Hopf–Rinow theorem it is also a geodesically complete manifold [51]. Its geometry is well understood, and we recall a few relevant concepts here and refer the reader to [51] for details. The manifold $SO(n)$ has a Lie group structure, and the corresponding Lie algebra is defined as $\mathfrak{so}(n) = \{W \in \mathbb{R}^{n\times n} \mid W^T = -W\}$. In other words, $\mathfrak{so}(n)$ (the set of left-invariant vector fields with the associated Lie bracket) is the set of $n \times n$ antisymmetric matrices. The Lie bracket $[\cdot,\cdot]$ on $\mathfrak{so}(n)$ is defined as the commutator, that is, $[U,V] = UV - VU$ for $U, V \in \mathfrak{so}(n)$. Now we can define a Riemannian metric on $SO(n)$ as follows: $\langle U, V\rangle_X = \mathrm{trace}(U^TV)$ for $U, V \in T_XSO(n)$, $X \in SO(n)$. Note that it can be shown that this is a bi-invariant Riemannian metric. Under this bi-invariant metric, we now define the Riemannian exponential and inverse exponential maps as follows. Let $X, Y \in SO(n)$ and $U \in T_XSO(n)$. Then the Riemannian inverse exponential map is defined as

$$\mathrm{Log}_X(Y) = X\log(X^TY), \tag{7.13}$$

and the Riemannian exponential map is defined as

$$\mathrm{Exp}_X(U) = X\exp(X^TU), \tag{7.14}$$

where exp and log are the matrix exponential and logarithm, respectively. Because of the computational complexity of the matrix exponential, we may instead choose to use a Riemannian retraction map, defined as follows. Given $W \in \mathfrak{so}(n)$, the Cayley map is a conformal mapping $\mathrm{Cay}: \mathfrak{so}(n) \to SO(n)$ defined by $\mathrm{Cay}(W) = (I_n + W)(I_n - W)^{-1}$. Using the Cayley map, we can define the Riemannian retraction map $\mathrm{Ret}_X: T_XSO(n) \to SO(n)$ by $\mathrm{Ret}_X(W) = \mathrm{Cay}(W)X$. Using the Riemannian exponential (or retraction) and inverse exponential maps, we can define the geodesic on $SO(n)$ as $\Gamma_X^Y(t) = \mathrm{Exp}_X(t\,\mathrm{Log}_X(Y))$.
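These maps translate directly into code. The following sketch (not from the chapter; hypothetical function names) uses SciPy's matrix exponential and logarithm for Eqs. (7.13)-(7.14) and implements the Cayley retraction:

```python
import numpy as np
from scipy.linalg import expm, logm

def so_log(X, Y):
    """Riemannian inverse exponential on SO(n): Log_X(Y) = X log(X^T Y)."""
    return X @ logm(X.T @ Y)

def so_exp(X, U):
    """Riemannian exponential on SO(n): Exp_X(U) = X exp(X^T U)."""
    return X @ expm(X.T @ U)

def cayley_retraction(X, W):
    """Retraction Ret_X(W) = Cay(W) X with Cay(W) = (I + W)(I - W)^{-1}, W skew."""
    I = np.eye(W.shape[0])
    return (I + W) @ np.linalg.inv(I - W) @ X
```

The retraction avoids the matrix exponential entirely: for skew-symmetric W, Cay(W) is a rotation, so the retracted point stays on SO(n).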

7.7 Weak consistency of iFME on SO(n)

In this section we present a detailed proof of convergence of our recursive estimator on SO(n). Analogous to our FM estimator on S(k), we first define the FM estimator on SO(n) and then prove its consistency.

Formally, let X_1, …, X_N be a set of N samples on SO(n), all lying inside a geodesic ball of an appropriate radius so that the FM exists and is unique. The iFME estimate M_n given the nth sample X_n is defined by

$M_1 = X_1$,  (7.15)

$M_n = \Gamma_{M_{n-1}}^{X_n}\!\left(\frac{1}{n}\right)$,  (7.16)

where $\Gamma_{M_{n-1}}^{X_n}$ is the shortest geodesic from M_{n-1} to X_n in SO(n). In what follows, we show that as the number of samples N tends to infinity, the iFME estimates converge to the expectation of the distribution from which the samples are drawn.
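The recursion (7.15)–(7.16) can be sketched in a few lines; this is a minimal illustration (helper names are ours) built on the exponential/logarithm maps of Section 7.6:

```python
import numpy as np
from scipy.linalg import expm, logm

def geodesic_point(M, X, t):
    # Point Gamma_M^X(t) on the geodesic from M (t = 0) to X (t = 1) in SO(n)
    return M @ expm(t * np.real(logm(M.T @ X)))

def ifme(samples):
    # M_1 = X_1; M_n moves 1/n of the way along the geodesic from M_{n-1} to X_n
    M = samples[0]
    for n, X in enumerate(samples[1:], start=2):
        M = geodesic_point(M, X, 1.0 / n)
    return M
```

As a sanity check, for rotations about a common axis the matrices commute and the recursion reduces to a running arithmetic mean of the rotation angles.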

Theorem 7.5

Let X_1, X_2, …, X_N be i.i.d. samples drawn from a probability distribution on SO(n). Then the inductive FM estimator (iFME) M_N of these samples converges to M as N → ∞, where M is the expectation of the probability distribution.

Proposition 7.1

Any element of SO(n) can be written as a composition of planar rotations in the planes generated by the n standard orthogonal basis vectors of R^n.

Proof

Let X ∈ SO(n). Observe that log(X) ∈ so(n), that is, log(X) is a skew-symmetric matrix. Let U = log(X). Then U_{ij} is the angle of rotation in the plane formed by the basis vectors e_i and e_j for all i = 1, …, n and j = i+1, …, n. □

By Proposition 7.1 we can express X as a product of n(n−1)/2 planar rotation matrices. Each planar rotation matrix can be mapped onto S^{n−1}; hence there exists a mapping F: SO(n) → S^{n−1} × ⋯ × S^{n−1} (n(n−1)/2 times). Let us denote this product space of hyperspheres by O(n−1, n(n−1)/2). Then F is an embedding of SO(n) in O(n−1, n(n−1)/2). Since this product space has n(n−1)/2 components, we use a pair of indices (i,j) to denote a component. Given X ∈ SO(n), the (i,j)th component of F(X) has ith and jth entries cos(θ_{ij}) and sin(θ_{ij}), respectively, and the remaining n−2 entries are zero. Here θ_{ij} is the planar rotation angle in the plane spanned by e_i and e_j. The following propositions allow us to prove that SO(n) is a geodesic submanifold of O(n−1, n(n−1)/2).
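The embedding F can be computed numerically from the matrix logarithm. The following sketch (the function name is ours) assumes the principal logarithm exists and reads the angle θ_{ij} off the (i,j) entry of log(X):

```python
import numpy as np
from scipy.linalg import logm

def embed_F(X):
    # Map X in SO(n) to n(n-1)/2 points on S^{n-1}: component (i, j) has
    # cos(theta_ij) in position i, sin(theta_ij) in position j, zeros elsewhere,
    # where theta_ij = log(X)_{ij}
    n = X.shape[0]
    U = np.real(logm(X))
    comps = {}
    for i in range(n):
        for j in range(i + 1, n):
            v = np.zeros(n)
            v[i], v[j] = np.cos(U[i, j]), np.sin(U[i, j])
            comps[(i, j)] = v
    return comps
```

Each of the n(n−1)/2 components is a unit vector, i.e., a point on S^{n−1}, as the construction requires.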

To do that, we first endow SO(n) with the Log-Euclidean metric defined as follows: given X, Y ∈ SO(n), $d(X,Y) = \frac{1}{\sqrt{2}}\|\log(X) - \log(Y)\|_F$. Notice that this metric is adjoint invariant, since ‖Z log(X) Z^T − Z log(Y) Z^T‖_F = ‖log(X) − log(Y)‖_F for any Z ∈ SO(n).

The following proposition shows that F is an isometric mapping with respect to the product ℓ2 arc-length metric on O(n−1, n(n−1)/2).

Proposition 7.2

F is an isometric mapping from SO(n) to O(n−1, n(n−1)/2).

Proof

Given X, Y ∈ SO(n), let F(X)_{ij} and F(Y)_{ij} be the (i,j)th components of F(X) and F(Y), respectively, that is, F(X)_{ij} = (0,…,0,cos(θ^X_{ij}),0,…,0,sin(θ^X_{ij}),0,…,0)^t and F(Y)_{ij} = (0,…,0,cos(θ^Y_{ij}),0,…,0,sin(θ^Y_{ij}),0,…,0)^t. Then F(X)_{ij}, F(Y)_{ij} ∈ S^{n−1}, and hence d(F(X)_{ij}, F(Y)_{ij}) = |θ^X_{ij} − θ^Y_{ij}|. But the (i,j)th entry of log(X) − log(Y) is precisely the signed version of this distance, and since each angle appears twice (with opposite signs) in a skew-symmetric matrix, the factor 1/√2 in the Log-Euclidean metric makes d(X, Y) equal to the product ℓ2 arc-length distance. This concludes the proof. □

It is easy to see that, under this metric, the geodesic $\Gamma_X^Y$ from X to Y is given by

$\Gamma_X^Y(t) = \exp(\log(X) + t(\log(Y) - \log(X)))$.

In the following propositions we show that F maps $\Gamma_X^Y$ to the (shortest) geodesic from F(X) to F(Y) for any X, Y ∈ SO(n).

Proposition 7.3

The shortest geodesic between x = (0,…,0,cos(θ_1),0,…,0,sin(θ_1),0,…,0)^t and y = (0,…,0,cos(θ_2),0,…,0,sin(θ_2),0,…,0)^t is given by

$\Gamma_x^y(t) = (0,\ldots,0,\cos((1-t)\theta_1 + t\theta_2),0,\ldots,0,\sin((1-t)\theta_1 + t\theta_2),0,\ldots,0)^t$.

Proof

From spherical geometry we know that, given x, y ∈ S^{n−1}, the shortest geodesic is given by

$\Gamma_x^y(t) = \frac{1}{\sin(\theta)}\left(\sin((1-t)\theta)\,x + \sin(t\theta)\,y\right)$,

where θ = d(x,y). Under the hypothesis, θ = |θ_2 − θ_1|, and for t ∈ [0,1], $\Gamma_x^y(t)$ is the unique point on S^{n−1} such that $d(\Gamma_x^y(t), x) = t|\theta_2 - \theta_1|$ and $d(\Gamma_x^y(t), y) = (1-t)|\theta_2 - \theta_1|$. Substituting

$\Gamma_x^y(t) = (0,\ldots,0,\cos((1-t)\theta_1 + t\theta_2),0,\ldots,0,\sin((1-t)\theta_1 + t\theta_2),0,\ldots,0)^t$,

which satisfies both conditions, we conclude the proof. □
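The spherical geodesic used in the proof can be checked numerically. A minimal sketch of the slerp formula (our own helper, assuming 0 < θ < π):

```python
import numpy as np

def slerp(x, y, t):
    # Shortest geodesic on the unit sphere:
    # Gamma_x^y(t) = (sin((1-t)*theta)*x + sin(t*theta)*y) / sin(theta),
    # where theta = d(x, y) = arccos(<x, y>)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return (np.sin((1 - t) * theta) * x + np.sin(t * theta) * y) / np.sin(theta)
```

For two points of the planar form in Proposition 7.3, this interpolates the angle linearly: the point at parameter t has angle (1−t)θ_1 + tθ_2, as the proposition states.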

Theorem 7.6

SO(n) is a geodesic submanifold of O(n−1, n(n−1)/2).

Proof

Given X, Y ∈ SO(n) and t ∈ [0,1], the shortest geodesic is given by $\Gamma_X^Y(t) = \exp(\log(X) + t(\log(Y) - \log(X)))$. Now observe that

$F(X)_{ij} = (0,\ldots,0,\cos(\theta_1),0,\ldots,0,\sin(\theta_1),0,\ldots,0)^t$,

where θ_1 = log(X)_{ij}, and

$F(Y)_{ij} = (0,\ldots,0,\cos(\theta_2),0,\ldots,0,\sin(\theta_2),0,\ldots,0)^t$,

where θ_2 = log(Y)_{ij}. By Proposition 7.3,

$F(\Gamma_X^Y(t))_{ij} = (0,\ldots,0,\cos(\theta),0,\ldots,0,\sin(\theta),0,\ldots,0)^t$,

where θ = (1−t)θ_1 + tθ_2, that is, F maps the geodesic in SO(n) to the geodesic in the product space. This concludes our proof. □

We are now ready to prove Theorem 7.5.

Proof

(Proof of Theorem 7.5.) To show the weak consistency of our proposed estimator on {X_i} ⊂ SO(n), it suffices to show the weak consistency of the estimator on {F(X_i)} ⊂ O(n−1, n(n−1)/2). A proof of the weak consistency of our FM estimator on the hypersphere was given in Section 7.3. Note that the distance d on O(n−1, n(n−1)/2) is given by

$d((x_1,\ldots,x_m),(y_1,\ldots,y_m)) = \sqrt{\sum_{i=1}^{m} d(x_i, y_i)^2}$,  (7.17)

where, on the right-hand side, d is the distance on S^{n−1}, and m = n(n−1)/2. One can easily show that the FM on O(n−1, n(n−1)/2) is obtained by computing the FM of each component separately. Thus the proof of weak consistency on S^{n−1} extends trivially to O(n−1, n(n−1)/2), which in turn proves weak consistency on SO(n). □
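The product metric (7.17) can be sketched directly (helper names are ours); it is the ℓ2 norm of the per-component arc-length distances:

```python
import numpy as np

def sphere_dist(x, y):
    # Arc-length distance on the unit sphere
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def product_dist(xs, ys):
    # Eq. (7.17): d((x_1,...,x_m), (y_1,...,y_m)) = sqrt(sum_i d(x_i, y_i)^2)
    return np.sqrt(sum(sphere_dist(x, y) ** 2 for x, y in zip(xs, ys)))
```

Because the squared distance separates over the components, minimizing the Fréchet functional on the product space amounts to minimizing it independently on each sphere factor, which is exactly the componentwise reduction used in the proof.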

7.8 Experimental results

We now evaluate the effectiveness of the iFME algorithm compared to its nonincremental counterpart (FME) for computing the FM of a finite set of samples synthetically generated on SO(n). We report results for samples drawn from a Log-Normal distribution on SO(20) with expectation I_20 (I denotes the identity matrix) and variance 0.25. The computation time and error of each method were recorded as each new sample was introduced incrementally. The error is defined as the geodesic distance between the estimated mean (using either iFME or FME) and the expectation of the input distribution. Because of the randomness in generating the samples, we repeated this experiment 100 times, and the mean computation time and error for each method are shown.

From the comparison plots in Fig. 7.4 we can see that iFME is very competitive in accuracy and much more efficient in execution time. We also present a comparison plot in Fig. 7.5 depicting the execution time taken by both algorithms to achieve a prespecified error tolerance (with respect to the expectation of the distribution from which the samples are drawn). This plot suggests that, to achieve a prespecified tolerance, iFME needs far less execution time than FME. Once again, all timing comparisons were performed on a desktop with a 3.3 GHz quad-core Intel i7 processor and 24 GB RAM.

Figure 7.4 Time and error comparisons of iFME and FME.
Figure 7.5 Error tolerance comparison of iFME and FME.

We now present an application to the atlas construction problem. Results on toy images taken from the MPEG-7 database using the proposed recursive FM computation technique are shown. Note that computing the atlas requires aligning the images prior to averaging. In this context we use the computationally efficient technique proposed in [52]: an arbitrary reference image is chosen from the data set, all the images are aligned to this reference, and the alignment transformations between the reference and the rest of the images in the population are averaged. It was shown that applying this average transformation to the chosen reference yields the atlas. For further details on this technique, we refer the reader to [52].

Two toy images were taken from the MPEG-7 database, and for each of them, we generated four rotated versions (highlighted in orange (mid gray in print version) in Fig. 7.6) and then computed the "mean image" (or atlas) of these rotated images (highlighted in red (dark gray in print version) in Fig. 7.6). The results clearly suggest that our approach provides an efficient way to compute an atlas for images related by planar rotations.

Figure 7.6 Mean images on MPEG-7 toy data.

7.9 Conclusions

In this chapter we presented a recursive estimator for computing the Riemannian barycenter, a.k.a. the Fréchet mean, of manifold-valued data sets. Specifically, we developed the theory for two well-known and commonly encountered Riemannian manifolds, namely the hypersphere S^n and the special orthogonal group SO(n). The common approach to estimating the FM from a set of data samples applies Riemannian gradient descent to the Fréchet functional. This approach is not well suited to situations where data are acquired incrementally, that is, in an online fashion: in such a setting, Riemannian gradient descent is inefficient both computationally and in storage, since it requires recomputing the Fréchet functional from scratch and thus storing all past data. In this chapter we presented a recursive FM estimator that does not require any optimization and computes the FM estimate in a single pass over the data. We proved the weak consistency of the estimator for S^n and SO(n). Further, we experimentally demonstrated that the estimator yields accuracy comparable to the Riemannian gradient descent algorithm. Several synthetic-data and real-data experiments were presented showcasing the performance of the estimator in comparison to the Riemannian gradient descent technique for FM computation and to principal geodesic analysis (PGA) for the classification experiments with applications in neuroimaging. Our future work will focus on developing an FM estimator that uses intrinsic weights in the incremental updates, which will likely improve the convergence rate of the estimator.

References

1. M. Moakher, P.G. Batchelor, Symmetric positive-definite matrices: from geometry to applications and visualization, Visualization and Processing of Tensor Fields 2006:285–298.

2. X. Pennec, P. Fillard, N. Ayache, A Riemannian framework for tensor computing, International Journal of Computer Vision 2006;66(1):41–66.

3. P. Turaga, A. Veeraraghavan, R. Chellappa, Statistical analysis on Stiefel and Grassmann manifolds with applications in computer vision, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2008:1–8.

4. S. Hauberg, A. Feragen, M.J. Black, Grassmann averages for scalable robust PCA, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014:3810–3817.

5. R. Chakraborty, B.C. Vemuri, Recursive Fréchet mean computation on the Grassmannian and its applications to computer vision, IEEE International Conference on Computer Vision (ICCV). 2015:4229–4237.

6. R. Chakraborty, S. Hauberg, B.C. Vemuri, Intrinsic Grassmann averages for online linear and robust subspace learning, arXiv preprint arXiv:1702.01005; 2017.

7. C.R. Goodall, K.V. Mardia, Projective shape analysis, Journal of Computational and Graphical Statistics 1999;8(2):143–168.

8. V. Patrangenaru, K.V. Mardia, Affine shape analysis and image analysis, 22nd Leeds Annual Statistics Research Workshop. 2003:57–62.

9. H. Hendriks, Z. Landsman, Mean location and sample mean location on manifolds: asymptotics, tests, confidence regions, Journal of Multivariate Analysis 1998;67(2):227–243.

10. Y. Chikuse, Asymptotic expansions for distributions of the large sample matrix resultant and related statistics on the Stiefel manifold, Journal of Multivariate Analysis 1991;39(2):270–283.

11. R. Chakraborty, B.C. Vemuri, et al., Statistics on the Stiefel manifold: theory and applications, The Annals of Statistics 2019;47(1):415–438.

12. K.V. Mardia, P.E. Jupp, Directional Statistics, vol. 494. John Wiley & Sons; 2009.

13. A. Srivastava, I. Jermyn, S. Joshi, Riemannian analysis of probability density functions with applications in vision, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2007:1–8.

14. D.S. Tuch, T.G. Reese, et al., Diffusion MRI of complex neural architecture, Neuron 2003;40(5):885–895.

15. R. Hartley, J. Trumpf, et al., Rotation averaging, International Journal of Computer Vision 2013;103(3):267–305.

16. M. Fréchet, Les éléments aléatoires de nature quelconque dans un espace distancié. Annales de l'institut Henri Poincaré. Presses Universitaires de France; 1948;vol. 10:215–310.

17. H. Karcher, Riemannian center of mass and mollifier smoothing, Communications on Pure and Applied Mathematics 1977;30(5):509–541.

18. B. Afsari, Riemannian LpImage center of mass: existence, uniqueness, and convexity, Proceedings of the American Mathematical Society 2011;139(2):655–673.

19. M. Arnaudon, F. Barbaresco, L. Yang, Riemannian medians and means with applications to radar signal processing, IEEE Journal of Selected Topics in Signal Processing 2013;7(4):595–604.

20. M. Charfi, Z. Chebbi, M. Moakher, B.C. Vemuri, Bhattacharyya median of symmetric positive-definite matrices and application to the denoising of diffusion-tensor fields, Biomedical Imaging (ISBI), 2013 IEEE 10th International Symposium on. IEEE; 2013:1227–1230.

21. A. Bhattacharya, R. Bhattacharya, Statistics on Riemannian manifolds: asymptotic distribution and curvature, Proceedings of the American Mathematical Society 2008;136(8):2959–2967.

22. D. Groisser, Newton's method, zeroes of vector fields, and the Riemannian center of mass, Advances in Applied Mathematics 2004;33(1):95–135.

23. X. Pennec, Intrinsic statistics on Riemannian manifolds: basic tools for geometric measurements, Journal of Mathematical Imaging and Vision 2006;25(1):127–154.

24. M. Moakher, A differential geometric approach to the geometric mean of symmetric positive-definite matrices, SIAM Journal of Matrix Analysis (SIMAX) 2005;26(3):735–747.

25. R. Bhatia, Matrix Analysis, vol. 169. Springer Science & Business Media; 2013.

26. C.R. Rao, Differential metrics in probability spaces, Differential Geometry in Statistical Inference 1987;10:217–240.

27. P. Fletcher, S. Joshi, Riemannian geometry for the statistical analysis of diffusion tensor data, Signal Processing 2007;87(2):250–262.

28. T. Ando, C.K. Li, R. Mathias, Geometric means, Linear Algebra and Its Applications 2004;385:305–334.

29. K.T. Sturm, Probability measures on metric spaces of nonpositive, Heat Kernels and Analysis on Manifolds, Graphs, and Metric Spaces: Lecture Notes From a Quarter Program on Heat Kernels, Random Walks, and Analysis on Manifolds and Graphs, vol. 338. April 16–July 13, 2002. Paris, France: Emile Borel Centre of the Henri Poincaré Institute; 2003:357.

30. Y. Lim, M. Pálfia, Weighted inductive means, Linear Algebra and Its Applications 2014;453:59–83.

31. J. Ho, G. Cheng, et al., Recursive Karcher expectation estimators and geometric law of large numbers, Proc. of the Conf. on Artificial Intelligence and Statistics (AISTATS). 2013:325–332.

32. H. Salehian, R. Chakraborty, E. Ofori, D. Vaillancourt, B.C. Vemuri, An efficient recursive estimator of the Fréchet mean on a hypersphere with applications to medical image analysis, Mathematical Foundations of Computational Anatomy 2015.

33. A. Feragen, S. Hauberg, M. Nielsen, F. Lauze, Means in spaces of tree-like shapes, Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE; 2011:736–746.

34. M. Arnaudon, C. Dombry, A. Phan, L. Yang, Stochastic algorithms for computing means of probability measures, Stochastic Processes and Their Applications 2012;122(4):1437–1455.

35. H.E. Cetingul, B. Afsari, et al., Group action induced averaging for HARDI processing, IEEE Intl. Symp. on Biomedical Imaging. 2012:1389–1392.

36. A.P. Dawid, et al., Further comments on some comments on a paper by Bradley Efron, The Annals of Statistics 1977;5(6):1249.

37. J. Burbea, Informative Geometry of Probability Spaces. [Tech. Rep.] Pittsburgh Univ. PA Center for Multivariate Analysis; 1984.

38. D.C. Brody, L.P. Hughston, Statistical Geometry in Quantum Mechanics, Proceedings - Royal Society. Mathematical, Physical and Engineering Sciences 1998;454:2445–2475.

39. W.S. Kendall, Probability, convexity, and harmonic maps with small image I: uniqueness and fine existence, Proceedings of the London Mathematical Society 1990;3(2):371–406.

40. B. Iversen, Hyperbolic Geometry, vol. 25. Cambridge University Press; 1992.

41. G. Amarasinghe, On the standard lengths of angle bisectors and the angle bisector theorem, Global Journal of Advanced Research on Classical and Modern Geometries 2012;1(1).

42. B. Afsari, R. Tron, et al., On the convergence of gradient descent for finding the Riemannian center of mass, SIAM Journal on Control and Optimization 2013;51(3):2230–2260.

43. N.J. Sloane, et al., The On-line Encyclopedia of Integer Sequences. 2003.

44. R. Chakraborty, D. Seo, B.C. Vemuri, An efficient exact-PGA algorithm for constant curvature manifolds, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:3976–3984.

45. Y. Xie, B.C. Vemuri, J. Ho, Statistical analysis of tensor fields, Medical Image Computing and Computer Aided Intervention (MICCAI). Springer; 2010:682–689.

46. D. Vaillancourt, M. Spraker, et al., High-resolution diffusion tensor imaging in the substantia nigra of de novo Parkinson disease, Neurology 2009:1378–1384.

47. H. Salehian, D. Vaillancourt, B.C. Vemuri, IPGA: Incremental Principal Geodesic Analysis With Applications to Movement Disorder Classification. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). NIH Public Access; 2014;vol. 17:765.

48. J. Sun, Y. Xie, et al., Dictionary learning on the manifold of square root densities and application to reconstruction of diffusion propagator fields, Intl. Conf. on Info. Processing in Medical Imaging (IPMI). 2013:619–631.

49. B. Jian, B.C. Vemuri, A unified computational framework for deconvolution to reconstruct multiple fibers from DWMRI, IEEE Transactions on Medical Imaging (TMI) 2007;26:1464–1471 10.1109/TMI.2007.907552.

50. Y. Deng, A. Rangarajan, S. Eisenschenk, B.C. Vemuri, A Riemannian framework for matching point clouds represented by the Schrödinger distance transform, IEEE Conference on Computer Vision and Pattern Recognition. 2014:3756–3761.

51. S. Helgason, Differential Geometry and Symmetric Spaces, vol. 12. Academic Press; 1962.

52. R. Chakraborty, M. Banerjee, D. Seo, S. Turner, D. Fuller, J. Forder, et al., An efficient recursive algorithm for atlas construction, Mathematical Foundations of Computational Anatomy 2015.
