Getting ready

PCA reduces the n-dimensional input data to r-dimensional data, where r < n. In simpler terms, PCA involves translating the origin to the data mean and rotating the axes such that one of the axes (the principal axis) captures the greatest variance in the data points. A reduced-dimension dataset is obtained from the original dataset by performing this transformation and then dropping (removing) the orthogonal axes along which the variance is low. Here, we employ the SVD method for PCA dimensionality reduction. Consider X, the n-dimensional data with p points, represented as a p × n matrix. Any real p × n matrix can be decomposed as follows:

X = U Σ Vᵀ

Here, U and V are orthonormal matrices (that is, Uᵀ.U = Vᵀ.V = I) of size p × n and n × n, respectively, and Σ is a diagonal matrix of size n × n containing the singular values. Next, slice Σ to keep only its first r columns, resulting in Σ_r; using U, we find the reduced-dimension data points Y_r:

Y_r = U Σ_r

The code presented here has been adapted from this GitHub link:
https://github.com/eliorc/Medium/blob/master/PCA-tSNE-AE.ipynb
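The decomposition above can be sketched in a few lines of NumPy. This is a minimal illustration of SVD-based PCA, not the notebook's implementation; the function name `pca_svd` and the toy data are assumptions for the example.

```python
import numpy as np

def pca_svd(X, r):
    # Translate the origin to the data mean (centering)
    X_centered = X - X.mean(axis=0)
    # SVD: X = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Keep the top-r singular values: Y_r = U_r @ Sigma_r
    return U[:, :r] * s[:r]

# Hypothetical toy data: p = 100 points in n = 5 dimensions, reduced to r = 2
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = pca_svd(X, 2)
print(Y.shape)  # (100, 2)
```

Note that Y_r = U Σ_r equals the centered data projected onto the first r right singular vectors, X V_r, so the two formulations are interchangeable.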
