Old versus new ML

The typical flow that an ML engineer might follow to develop a prediction model is as follows:

  1. Gather data
  2. Extract relevant features from the data
  3. Choose an ML architecture (CNN, ANN, SVM, decision trees, and so on)
  4. Train the model
  5. Evaluate the model and repeat steps 3 to 5 until a satisfactory solution is found
  6. Test the model in the field
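
For concreteness, here is a minimal sketch of this flow using scikit-learn. The dataset (scikit-learn's built-in handwritten digits), the feature step, and the choice of an SVM classifier are all illustrative assumptions, not part of any particular project:

    # Sketch of the classic ML workflow; dataset and model are illustrative
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # 1. Gather data
    digits = load_digits()
    X, y = digits.data, digits.target

    # 2. Extract relevant features (here, just scale the raw pixel values)
    X = X / 16.0

    # 3. Choose an ML architecture (an SVM in this sketch)
    model = SVC(kernel="rbf")

    # 4. Train the model
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model.fit(X_train, y_train)

    # 5. Evaluate the model; in practice, repeat steps 3 to 5 until satisfied
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))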

As mentioned in the previous section, the idea of ML is to have an algorithm that is flexible enough to learn the underlying process behind the data. That said, many classic ML methods are not powerful enough to learn directly from raw data; the data must somehow be prepared before these algorithms can be applied.

We briefly mentioned it before, but this process of preparing the data is often called feature extraction: a specialist keeps only the details of the data that are believed to be relevant to its underlying process and filters out the rest. This makes the classification problem easier for the selected classifier, as it no longer has to work with irrelevant variables that it might otherwise treat as important.
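
To make this concrete, the following sketch shows what hand-crafted feature extraction might look like for the small digit images used above. The specific features (ink density, contrast, and two asymmetry measures) are hypothetical choices a specialist might make, not a recommended set:

    import numpy as np
    from sklearn.datasets import load_digits

    images = load_digits().images  # shape: (n_samples, 8, 8)

    def extract_features(image):
        # Reduce 64 raw pixels to a handful of hand-picked summary features
        return np.array([
            image.mean(),                               # overall ink density
            image.std(),                                # contrast
            image[:4].mean() - image[4:].mean(),        # top/bottom asymmetry
            image[:, :4].mean() - image[:, 4:].mean(),  # left/right asymmetry
        ])

    features = np.array([extract_features(img) for img in images])
    print(features.shape)  # (1797, 4): the classifier now sees 4 variables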

The single coolest feature of the new deep learning methods is that they don't need (or need far less of) a feature extraction phase. Instead, given a large enough dataset, the model itself is capable of learning the best features to represent the data, directly from the data itself, as the sketch at the end of this section illustrates. Examples of these new methods include the following:

  • Deep CNNs
  • Deep AutoEncoders
  • Generative Adversarial Networks (GANs)

All of these methods are part of deep learning, where vast amounts of data are exposed to multilayer neural networks. However, the benefits of these new methods come at a cost: all of these algorithms require much more computing resources (CPU and GPU), and they can take much longer to train than traditional methods.
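
As a contrast with the hand-crafted features shown earlier, here is a minimal sketch (assuming TensorFlow/Keras and 28x28 grayscale inputs; the layer sizes are illustrative) of a deep CNN whose convolutional layers learn their own feature representation directly from raw pixels:

    import tensorflow as tf

    model = tf.keras.Sequential([
        # Convolutional layers take over the role of feature extraction
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        # Classification head operating on the learned features
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Training such a network end to end typically demands far more compute than the SVM sketch earlier in this section, which is exactly the trade-off just described.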
