Semisupervised learning

Between supervised and unsupervised learning lies semi-supervised learning. Here, the ML model receives an incomplete training signal: a training set in which some of the target outputs are missing. Semi-supervised learning is largely assumption driven, and learning algorithms for partially labeled datasets typically rely on three kinds of assumptions: the smoothness, cluster, and manifold assumptions. In other words, semi-supervised learning can also be described as weak supervision, or as a bootstrapping technique that exploits the hidden wealth of unlabeled examples to enhance learning from a small amount of labeled data.
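As a concrete illustration, the following is a minimal sketch (assuming scikit-learn, which the text itself does not name) of training on a dataset where most target outputs are missing; by convention, unlabeled examples are marked with -1:

```python
# Minimal semi-supervised sketch (assumes scikit-learn is available).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Build a toy dataset and hide most of the labels (-1 marks "unlabeled").
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.9] = -1  # keep only ~10% of the labels

# Self-training bootstraps from the few labeled points: the base classifier
# iteratively labels the unlabeled points it is most confident about.
base = SVC(probability=True, gamma="auto")
model = SelfTrainingClassifier(base).fit(X, y_partial)

print("Accuracy on the full (originally hidden) labels:", model.score(X, y))
```

This is only one of several possible strategies; it reflects the "bootstrapping" view described above, in which the model's own confident predictions on unlabeled data supplement the small labeled set.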

As already mentioned, acquiring labeled data for a learning problem often requires a skilled human agent. The cost associated with the labeling process may therefore render a fully labeled training set infeasible, whereas the acquisition of unlabeled data is relatively inexpensive.

For example, transcribing an audio segment, determining the 3D structure of a protein, or determining whether there is oil at a particular location all require expensive expert labeling. Approaches to semi-supervised learning include generative methods based on expectation maximization and transductive inference, and the paradigm has also been related to human cognition. In such situations, semi-supervised learning can be of great practical value.
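To make the transductive flavor concrete, here is a second hedged sketch (again assuming scikit-learn) that uses graph-based label spreading to propagate a handful of known labels to the unlabeled points, exploiting the smoothness and cluster assumptions mentioned earlier:

```python
# Transductive sketch: graph-based label spreading (assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

# Two interleaving half-moons; only a few points from each class keep labels.
X, y = make_moons(n_samples=300, noise=0.1, random_state=42)
labeled_idx = np.concatenate([np.where(y == c)[0][:3] for c in (0, 1)])
y_partial = np.full_like(y, -1)          # -1 means "label unknown"
y_partial[labeled_idx] = y[labeled_idx]  # reveal only six labels in total

# Label spreading builds a similarity graph over all points and lets the
# known labels diffuse along it (smoothness/cluster assumptions at work).
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)

print("Fraction of hidden labels recovered correctly:",
      (model.transduction_ == y).mean())
```

Unlike the self-training example, this method assigns labels directly to the given unlabeled points rather than producing a general-purpose classifier, which is what distinguishes transductive from inductive semi-supervised learning.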
