# Principal Component Analysis

Principal component analysis (PCA) is widely used for constructing a subspace that approximates the distribution of training data and has a strong relationship with the SVD mentioned above. Let a set of training data be denoted by $\mathcal{D} = \{x_1, x_2, \dots, x_M\}$, where the $x_i$ are $D$-vectors. Let $\bar{x}$ denote the mean of the $x_i$ ($i = 1, 2, \dots, M$), and let the $D \times D$ covariance matrix be denoted by $\Sigma_{\mathrm{cov}}$, where

$$
\bar{x} = \frac{1}{M} \sum_{i=1}^{M} x_i, \qquad
\Sigma_{\mathrm{cov}} = \frac{1}{M} \sum_{i=1}^{M} (x_i - \bar{x})(x_i - \bar{x})^T.
$$

It is clear that $\Sigma_{\mathrm{cov}}$ is a symmetric real matrix. In many cases, the intrinsic dimension of the distribution of the training data is lower than the dimension of each datum, $D$, and PCA is used to obtain a low-dimensional subspace that approximates the distribution.
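As a minimal sketch of the definitions above (the data, shapes, and seed are illustrative assumptions; note the $1/M$ normalization follows the definition here, whereas NumPy's `np.cov` defaults to $1/(M-1)$):

```python
import numpy as np

# Illustrative training set: M = 100 data of dimension D = 3,
# stacked as rows of an (M, D) array.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

x_bar = X.mean(axis=0)                       # mean vector, shape (D,)
centered = X - x_bar
sigma_cov = centered.T @ centered / len(X)   # (D, D) covariance, 1/M normalization

# The covariance matrix is symmetric and real, as stated.
assert np.allclose(sigma_cov, sigma_cov.T)
```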

In PCA, a set of $D$ pairs of an eigenvalue and an eigenvector is computed from the covariance matrix of the training data, $\Sigma_{\mathrm{cov}}$. Let $\lambda_i \in \mathbb{R}$ ($i = 1, 2, \dots, D$) denote the eigenvalues and let $w_i$ ($i = 1, 2, \dots, D$) denote the eigenvectors. It is known that, given a symmetric real matrix, $\Sigma_{\mathrm{cov}}$, there exist $D$ pairs, $(\lambda_i, w_i)$, that satisfy the following equation:

$$
\Sigma_{\mathrm{cov}} w_i = \lambda_i w_i, \tag{2.60}
$$

and that the set of the eigenvectors, $\{w_i \mid i = 1, 2, \dots, D\}$, is an orthonormal basis of the $D$-dimensional space: $\|w_i\| = 1$ and $w_i^T w_j = 0$ if $i \neq j$. Assume that the eigenvalues are in decreasing order, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_D$, and let the set of eigenvectors corresponding to the $K$ largest ($K < D$) eigenvalues be denoted by $\mathcal{W}_K = \{w_i \mid 1 \le i \le K\}$.
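The eigendecomposition and ordering just described can be sketched as follows (the data and sizes are illustrative assumptions; `np.linalg.eigh` is used because $\Sigma_{\mathrm{cov}}$ is symmetric, and it returns eigenvalues in ascending order, hence the reversal):

```python
import numpy as np

# Illustrative data: M = 200, D = 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
x_bar = X.mean(axis=0)
sigma_cov = (X - x_bar).T @ (X - x_bar) / len(X)

eigvals, eigvecs = np.linalg.eigh(sigma_cov)   # ascending order
order = np.argsort(eigvals)[::-1]              # re-sort into decreasing order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

K = 2
W_K = eigvecs[:, :K]    # columns w_1, ..., w_K for the K largest eigenvalues

# Each pair satisfies sigma_cov @ w_i = lambda_i * w_i (eq. 2.60),
# and the eigenvectors form an orthonormal basis.
assert np.allclose(sigma_cov @ eigvecs, eigvecs * eigvals)
assert np.allclose(eigvecs.T @ eigvecs, np.eye(4))
```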

When approximating the training data using a subspace, $\Pi$, every training datum, $x_i$, is projected onto the subspace, and the original data are approximated by the projected data. Let $\hat{x}_i$ denote the projected data, and let the approximation error, $E(\Pi)$, be defined as follows:

$$
E(\Pi) = \sum_{i=1}^{M} \| x_i - \hat{x}_i \|^2.
$$

Then, under the condition that the dimension of the subspace is fixed to $K$, the subspace that minimizes $E(\Pi)$ is $\mathrm{span}(\mathcal{W}_K)$, and the unit vectors, $w_i$ ($i = 1, 2, \dots, K$), that span the subspace are called the *principal components* of the distribution of the training data. The projected data, $\hat{x}_i$, can be represented as

$$
\hat{x}_i = \bar{x} + W_K z_i, \tag{2.62}
$$

where $z_i = [z_1, z_2, \dots, z_K]^T$ is the coordinate of $\hat{x}_i$ described based on the basis, $\mathcal{W}_K = \{w_1, w_2, \dots, w_K\}$, and can be computed as follows:

$$
z_i = W_K^T (x_i - \bar{x}), \tag{2.63}
$$

where $W_K$ is a $D \times K$ matrix such that $W_K = [w_1, w_2, \dots, w_K]$. Substituting (2.63) into (2.62) results in

$$
\hat{x}_i = \bar{x} + W_K W_K^T (x_i - \bar{x}).
$$
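A sketch of the projection and reconstruction of (2.62) and (2.63) (the data and the choice of $K$ are illustrative assumptions). It also checks numerically that the error equals $M$ times the sum of the discarded eigenvalues, which is what makes $\mathrm{span}(\mathcal{W}_K)$ optimal:

```python
import numpy as np

# Illustrative data: M = 150, D = 5, keep K = 3 components.
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))
x_bar = X.mean(axis=0)
sigma_cov = (X - x_bar).T @ (X - x_bar) / len(X)

eigvals, eigvecs = np.linalg.eigh(sigma_cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

K = 3
W_K = eigvecs[:, :K]                 # D x K matrix [w_1, ..., w_K]

Z = (X - x_bar) @ W_K                # coordinates z_i, eq. (2.63), shape (M, K)
X_hat = x_bar + Z @ W_K.T            # reconstructed data, eq. (2.62)

error = np.sum((X - X_hat) ** 2)     # approximation error E(span(W_K))

# The minimal error is M times the sum of the discarded eigenvalues.
assert np.isclose(error, len(X) * eigvals[K:].sum())
# Projecting the reconstruction again changes nothing (idempotence).
assert np.allclose((X_hat - x_bar) @ W_K, Z)
```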

When the training data follow a Gaussian distribution, each eigenvalue indicates the variance of the distribution along the corresponding eigenvector. Let a $D$-dimensional Gaussian distribution be denoted by $N(x \mid \bar{x}, \Sigma_{\mathrm{cov}})$, where the $D$-vector, $\bar{x}$, denotes the mean and the $D \times D$ matrix, $\Sigma_{\mathrm{cov}}$, denotes the covariance:

$$
N(x \mid \bar{x}, \Sigma_{\mathrm{cov}}) = \frac{1}{(2\pi)^{D/2} |\Sigma_{\mathrm{cov}}|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \bar{x})^T \Sigma_{\mathrm{cov}}^{-1} (x - \bar{x}) \right). \tag{2.66}
$$

From (2.60), the following equation about the covariance matrix holds:

$$
W \Sigma_{\mathrm{cov}} = \Lambda W,
$$

where

$$
W = [w_1, w_2, \dots, w_D]^T
$$

and

$$
\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_D).
$$

Multiplying both sides by $W^T = W^{-1}$ from the right, one obtains $W \Sigma_{\mathrm{cov}} W^T = \Lambda$, that is,

$$
\Sigma_{\mathrm{cov}} = W^T \Lambda W, \tag{2.70}
$$

and

$$
\Sigma_{\mathrm{cov}}^{-1} = W^T \Lambda^{-1} W, \tag{2.71}
$$

where

$$
\Lambda^{-1} = \mathrm{diag}(\lambda_1^{-1}, \lambda_2^{-1}, \dots, \lambda_D^{-1}).
$$

Substituting (2.71) into (2.66) results in

$$
N(x \mid \bar{x}, \Sigma_{\mathrm{cov}}) = \frac{1}{(2\pi)^{D/2} |\Sigma_{\mathrm{cov}}|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \bar{x})^T W^T \Lambda^{-1} W (x - \bar{x}) \right). \tag{2.73}
$$
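The factorizations (2.70) and (2.71) can be checked numerically; a small sketch under the convention that $W$ stacks the eigenvectors as rows (the test matrix is an illustrative assumption, made positive definite so the inverse is well conditioned):

```python
import numpy as np

# Illustrative symmetric positive-definite 4 x 4 "covariance" matrix.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
sigma_cov = A @ A.T + 4 * np.eye(4)

eigvals, eigvecs = np.linalg.eigh(sigma_cov)
W = eigvecs.T                    # rows are the eigenvectors w_i
Lam = np.diag(eigvals)

# W is orthogonal: W^T = W^{-1}.
assert np.allclose(W @ W.T, np.eye(4))
# (2.70): sigma_cov = W^T Lambda W.
assert np.allclose(W.T @ Lam @ W, sigma_cov)
# (2.71): sigma_cov^{-1} = W^T Lambda^{-1} W.
assert np.allclose(W.T @ np.diag(1 / eigvals) @ W, np.linalg.inv(sigma_cov))
```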

Here, let $y = [y_1, y_2, \dots, y_D]^T = Wx$. Because $\{w_i \mid i = 1, 2, \dots, D\}$ is an orthonormal basis, $W$ is a rotation matrix or a reflection matrix. Substituting $y = Wx$ into Eq. (2.73) results in

$$
N(x \mid \bar{x}, \Sigma_{\mathrm{cov}}) = \frac{1}{(2\pi)^{D/2} |\Sigma_{\mathrm{cov}}|^{1/2}} \exp\!\left( -\frac{1}{2} (y - \bar{y})^T \Lambda^{-1} (y - \bar{y}) \right),
$$

where $\bar{y} = W\bar{x}$. Because $\Lambda^{-1}$ is a diagonal matrix and $|\Sigma_{\mathrm{cov}}| = \prod_{i=1}^{D} \lambda_i$, the result is

$$
\prod_{i=1}^{D} \frac{1}{\sqrt{2\pi \lambda_i}} \exp\!\left( -\frac{(y_i - \bar{y}_i)^2}{2\lambda_i} \right).
$$

The $D$-dimensional Gaussian distribution is a product of $D$ one-dimensional Gaussian distributions of $y_i$, of which the mean is $\bar{y}_i$ and the variance is $\lambda_i$. Here, it should be remembered that $y = Wx$ and that $y$ is the coordinates of $x$ represented using the basis, $\{w_i \mid i = 1, 2, \dots, D\}$. The one-dimensional Gaussian distribution, $N(y_i \mid \bar{y}_i, \lambda_i)$, represents the distribution along the eigenvector, $w_i$, and the corresponding eigenvalue, $\lambda_i$, represents the variance along that direction. The principal components that span the subspace for the data approximation correspond to the eigenvectors along which the distribution has the larger variances.
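This interpretation can be illustrated with sampled Gaussian data (the true covariance, sample size, and seed are illustrative assumptions): the variance of the coordinates $y_i = w_i^T x$ equals the eigenvalue $\lambda_i$ exactly for the empirical covariance, and approaches the true variance along each principal direction as the sample grows.

```python
import numpy as np

# Illustrative Gaussian data with known axis-aligned variances 9, 4, 1.
rng = np.random.default_rng(3)
true_cov = np.diag([9.0, 4.0, 1.0])
X = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=10_000)

x_bar = X.mean(axis=0)
sigma_cov = (X - x_bar).T @ (X - x_bar) / len(X)
eigvals, eigvecs = np.linalg.eigh(sigma_cov)    # ascending order

# Coordinates in the eigenvector basis: y = W x (here, one row per datum).
Y = (X - x_bar) @ eigvecs

# Variance along each eigenvector equals the corresponding eigenvalue.
assert np.allclose(Y.var(axis=0), eigvals)
# Empirical eigenvalues are close to the true variances 1, 4, 9.
assert np.allclose(eigvals, [1.0, 4.0, 9.0], rtol=0.2)
```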

Comparing the SVD of a matrix shown in (2.56) with (2.70), it can be seen that $U$ and $V$ in (2.56) are identical with $W$ in (2.70) and that the singular values are identical with the eigenvalues, $\sigma_i = \lambda_i$.
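A sketch of this correspondence (the test matrix is an illustrative assumption; for a symmetric positive-semidefinite matrix, the SVD factors agree with the eigenvectors up to the sign of each column, and the singular values equal the eigenvalues):

```python
import numpy as np

# Illustrative symmetric positive-semidefinite 3 x 3 matrix.
rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3))
sigma_cov = A @ A.T

U, s, Vt = np.linalg.svd(sigma_cov)             # s in descending order
eigvals, eigvecs = np.linalg.eigh(sigma_cov)    # eigvals in ascending order

# Singular values equal the eigenvalues (after matching the ordering).
assert np.allclose(s, eigvals[::-1])

# U and V match the eigenvectors up to a per-column sign flip:
# U^T E should be diagonal with entries +-1, where E holds the
# eigenvectors reordered to descending eigenvalues.
E = eigvecs[:, ::-1]
assert np.allclose(np.abs(U.T @ E), np.eye(3), atol=1e-6)
assert np.allclose(np.abs(Vt @ E), np.eye(3), atol=1e-6)
```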