Gradient descent and VC Dimension theories

Gradient descent and VC Dimension are two fundamental theories in machine learning. In general, gradient descent gives a structured approach to finding the optimal coefficients of a function. The space of possible coefficient values can be large, and gradient descent iteratively adjusts them to move toward a minimum, the point where the cost function (for example, the squared sum of errors) is lowest.
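
As a minimal sketch of the idea (not taken from the text), the following fits the two coefficients of a straight line by repeatedly stepping against the gradient of the squared-error cost. The toy data, learning rate, and iteration count are all assumptions chosen for illustration.

```python
# A hypothetical illustration: batch gradient descent fitting y = w*x + b
# by minimizing the mean squared error on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)  # noisy line, true w=3, b=2

w, b = 0.0, 0.0        # initial coefficients
learning_rate = 0.1

for _ in range(500):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of the cost J = mean((y_hat - y)^2) / 2 with respect to w and b
    grad_w = (error * x).mean()
    grad_b = error.mean()
    # Step "downhill": move each coefficient against its gradient
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # converges close to 3.0 and 2.0
```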

VC Dimension measures the capacity, or richness, of a hypothesis class: it is the largest number of points that the hypothesis class can shatter, that is, classify correctly under every possible labeling of those points. In essence, it provides a structured way to assess the limits of what a hypothesis can represent. For example, a linear boundary in two dimensions can shatter 3 points in general position but no set of 4 points. Hence, the VC Dimension of linear classifiers in the plane is 3.
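
The shattering claim above can be checked directly. The sketch below (my own illustration, not from the text) tests every possible labeling of a point set for linear separability using a small feasibility linear program via scipy.optimize.linprog; the specific point sets are assumptions.

```python
# Hypothetical demo: a line shatters 3 points in general position,
# but cannot shatter the 4 corners of a square (the XOR labeling fails).
from itertools import product

import numpy as np
from scipy.optimize import linprog


def linearly_separable(points, labels):
    """True if some line w.x + b = 0 separates the +1 points from the -1 points."""
    # Feasibility LP: find (w1, w2, b) with y_i * (w . x_i + b) >= 1 for all i.
    A_ub = np.array([[-y * x[0], -y * x[1], -y] for x, y in zip(points, labels)])
    b_ub = -np.ones(len(points))
    res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3, method="highs")
    return res.status == 0  # 0 means a feasible separator was found


def shattered(points):
    """A set is shattered if every +/-1 labeling of it is linearly separable."""
    return all(linearly_separable(points, labels)
               for labels in product([-1, 1], repeat=len(points)))


three = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]               # 3 points, general position
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # 4 corners of a square

print(shattered(three))   # True  -> 3 points can be shattered by a line
print(shattered(square))  # False -> no line realizes the XOR labeling of 4 points
```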

VC Dimension, like many other topics in computational learning theory, is both complex and interesting. It is a lesser-known (and less discussed) topic, but one with profound implications, as it attempts to answer questions about the limits of what can be learned.
