There's more...

The net result of using PCA in this recipe is that the original search space of 14 dimensions (that is, 14 features) is reduced to 4 dimensions that explain almost all of the variation in the original dataset.

PCA is not purely an ML concept and was used in finance for many years before the ML movement. At its core, PCA uses an orthogonal transformation (each component is perpendicular to the others) to map the original features (apparent dimensions) to a set of newly derived dimensions, so that most of the redundant and collinear attributes are removed. The derived components (the actual latent dimensions) are linear combinations of the original attributes.
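The following is a minimal sketch of this idea using Spark's DataFrame-based PCA transformer. The object name, the local SparkSession, the "features" column name, and the random stand-in data for the recipe's 14 features are all assumptions for illustration; in the recipe you would fit the model on the actual feature vectors:

```scala
import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object PCASketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("PCASketch").master("local[*]").getOrCreate()

    // Hypothetical stand-in for the recipe's 14-feature rows
    val rng = new scala.util.Random(42)
    val rows = Seq.fill(20)(Vectors.dense(Array.fill(14)(rng.nextGaussian()))).map(Tuple1.apply)
    val df = spark.createDataFrame(rows).toDF("features")

    // Project the 14 original dimensions onto 4 derived components
    val pcaModel = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(4)
      .fit(df)

    // Each column of pc expresses a component as a linear combination of the 14 inputs
    println(pcaModel.pc)
    // Fraction of the total variance captured by each of the 4 components
    println(pcaModel.explainedVariance)

    pcaModel.transform(df).select("pcaFeatures").show(truncate = false)
    spark.stop()
  }
}
```

Inspecting pcaModel.pc makes the "linear combination" point concrete: it is a 14 x 4 loading matrix, and each derived dimension is simply the dot product of an original row with one of its columns.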

While it is easy to program PCA from scratch using RDDs, the best way to learn it is to implement PCA with a neural network framework and inspect the intermediate results. You can do this in Caffe (on Spark), or just Torch, to see that despite the mystery surrounding it, PCA is a straight linear transformation. At its core, PCA is a straightforward exercise in linear algebra, regardless of whether you use the covariance matrix or SVD for the decomposition. A sketch of both routes follows.
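The sketch below uses the RDD-based RowMatrix API to show the two decomposition routes side by side. The object name, the local SparkSession, and the random stand-in data are assumptions for illustration; note that computeSVD does not center the data, so its right singular vectors only match the PCA components when the input has been mean-centered first:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.sql.SparkSession

object PcaFromLinAlg {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("PcaFromLinAlg").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical stand-in for the recipe's 14-feature rows
    val rng = new scala.util.Random(7)
    val rows = sc.parallelize(Seq.fill(50)(Vectors.dense(Array.fill(14)(rng.nextGaussian()))))
    val mat = new RowMatrix(rows)

    // Route 1: eigendecomposition of the covariance matrix
    val pc = mat.computePrincipalComponents(4)   // 14 x 4 matrix; columns are the components
    val projected = mat.multiply(pc)             // rows projected onto the 4-dimensional basis

    // Route 2: SVD; for mean-centered data, the right singular vectors (V) are the components
    val svd = mat.computeSVD(4, computeU = false)
    println(svd.s)   // singular values
    println(svd.V)   // 14 x 4 right singular vectors

    println(s"Projected matrix: ${projected.numRows()} rows x ${projected.numCols()} cols")
    spark.stop()
  }
}
```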

Spark provides PCA example code in its GitHub source repository, under both the dimensionality reduction and feature extraction examples.
