Feature selection algorithm

In this real-world case of predicting bank failures, we have a large number of variables, or financial ratios, with which to train a classifier, so we might expect to obtain a strong predictive model. With this in mind, why would we want to select only a subset of these variables and reduce their number?

Well, in some cases, increasing the dimensionality of the problem by adding new features can actually reduce the performance of our model. This is known as the curse of dimensionality.

According to this problem, adding more features, that is, increasing the dimensionality of our feature space, requires collecting more data. The number of observations we need grows exponentially with the number of features if we want to cover the feature space densely enough to keep learning reliably and to avoid overfitting.
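To make this concrete, here is a minimal sketch, in Python, of how the required sample size explodes with dimensionality. The number of bins per feature and the target number of observations per cell are illustrative assumptions, not figures from the text:

# Minimal sketch of the curse of dimensionality: if each feature is split
# into 10 bins and we want at least 5 observations per cell, the number of
# observations required grows exponentially with the number of features.
# The bin count and per-cell target are illustrative assumptions.
bins_per_feature = 10
obs_per_cell = 5

for n_features in (1, 2, 3, 5, 10):
    required = obs_per_cell * bins_per_feature ** n_features
    print(f"{n_features} features -> ~{required:,} observations")

Under these assumptions, just 10 features would already require tens of billions of observations to cover the space densely, which is why reducing the number of variables is often more practical than collecting more data.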

This problem is commonly observed in cases in which the number of observations in our data is not very high relative to the number of variables.

Feature selection is also useful for identifying and removing unneeded, irrelevant, and redundant variables from the data, and for reducing the complexity of the model.
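As a hedged illustration of this idea, the following sketch first removes redundant variables (one of each highly correlated pair) and then keeps the variables most associated with the target using a univariate F-test from scikit-learn. The synthetic data, column names, correlation threshold, and number of features kept are illustrative assumptions, not the actual bank-failure dataset or the selection method used later in the text:

# Illustrative feature selection on synthetic data; the dataset, the
# "ratio_*" column names, the 0.95 correlation threshold, and k=10 are
# assumptions for this sketch, not values from the book.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for a table of financial ratios.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           n_redundant=10, random_state=0)
X = pd.DataFrame(X, columns=[f"ratio_{i}" for i in range(30)])

# 1) Drop redundant variables: remove one of each highly correlated pair,
#    scanning only the upper triangle of the correlation matrix.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
redundant = [col for col in upper.columns if (upper[col] > 0.95).any()]
X_reduced = X.drop(columns=redundant)

# 2) Drop irrelevant variables: keep the k features most associated
#    with the target according to a univariate F-test.
selector = SelectKBest(score_func=f_classif, k=min(10, X_reduced.shape[1]))
selector.fit(X_reduced, y)
selected = X_reduced.columns[selector.get_support()]
print("Dropped as redundant:", redundant)
print("Selected features:", list(selected))

Dropping one member of each highly correlated pair targets redundancy, while the F-test targets irrelevance; together they address both kinds of unneeded variables mentioned above.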
