The support vector method (SVM) is a binary classification method based on using a hyperplane to divide objects into classes. A hyperplane always has one dimension fewer than the original space. In three-dimensional space, for example, a hyperplane is an ordinary two-dimensional plane. The distance from the hyperplane to the nearest points of each class (the margin) should be as large as possible. The vectors that are closest to the separating hyperplane are called support vectors. In practice, cases where the data can be divided by a hyperplane – in other words, linearly separable cases – are quite rare. Otherwise, all the elements of the training dataset are embedded in a higher-dimensional space using a special mapping, chosen so that the dataset becomes linearly separable in the new space.
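To make the mapping idea concrete, here is a minimal sketch with a hypothetical dataset: two classes arranged in concentric rings are not linearly separable in 2D, but the mapping φ(x₁, x₂) = (x₁, x₂, x₁² + x₂²) lifts them into 3D, where a horizontal plane separates them exactly. The radii, threshold, and mapping are illustrative choices, not part of the original text:

```python
import numpy as np

# Hypothetical data: class A lies inside a circle of radius 1,
# class B outside a circle of radius 2 -- not linearly separable in 2D.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([rng.uniform(0.0, 1.0, 100),   # class A
                        rng.uniform(2.0, 3.0, 100)])  # class B
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = np.array([0] * 100 + [1] * 100)

# Lift into 3D: the extra coordinate is the squared distance from the origin.
Z = np.column_stack([X, (X ** 2).sum(axis=1)])

# In the lifted space the plane z = 1.5^2 separates the classes perfectly.
pred = (Z[:, 2] > 1.5 ** 2).astype(int)
print((pred == y).all())  # True: the lifted data is linearly separable
```

Kernel SVMs perform this lifting implicitly, via a kernel function, without ever computing the high-dimensional coordinates.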
One-Class SVM (OCSVM) is an adaptation of the support vector method that focuses on anomaly detection. OCSVM differs from the standard version of SVM in that the resulting optimization problem includes a parameter bounding the small fraction of points to be treated as anomalous, which allows this method to be used to detect anomalies. These anomalous values lie between the origin and the optimal separating hyperplane. All other data, belonging to the single (normal) class, falls on the opposite side of the optimal separating hyperplane.
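A minimal sketch of this in scikit-learn, assuming synthetic Gaussian "normal" data: the `nu` parameter is an upper bound on the fraction of training points treated as anomalies (the data and parameter values below are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical normal behavior: 200 points clustered around the origin.
rng = np.random.default_rng(42)
X_train = rng.normal(0, 1, size=(200, 2))

# nu bounds the fraction of training points allowed on the anomalous
# side of the hyperplane (here, at most about 5%).
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
ocsvm.fit(X_train)

labels = ocsvm.predict(X_train)           # +1 = normal, -1 = anomaly
print((labels == -1).mean())              # roughly at most nu
```

With an RBF kernel, the implicit mapping into feature space is what makes a simple hyperplane boundary flexible enough to enclose irregularly shaped normal regions.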
There's also another type of OCSVM, known as Support Vector Data Description (SVDD), that uses a spherical rather than a planar (or linear) boundary. The algorithm obtains a spherical boundary around the data in the feature space. The volume of this hypersphere is minimized to reduce the effect of incorporating outliers in the solution.
OCSVM assigns each test data point a score: the signed distance from the point to the optimal hyperplane. Positive values in the OCSVM output represent normal behavior (with higher values representing greater normality), while negative values represent anomalous behavior (the lower the value, the more significant the anomaly).
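This sign convention can be checked with scikit-learn's `decision_function`, again on illustrative synthetic data (the specific test points are assumptions chosen to sit clearly inside and far outside the training distribution):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical normal data centered at the origin.
rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(300, 2))

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

# decision_function returns the signed distance to the hyperplane:
# positive = normal side, negative = anomalous side.
inlier = np.array([[0.0, 0.0]])    # near the training data
outlier = np.array([[8.0, 8.0]])   # far from the training data
print(model.decision_function(inlier))   # positive score
print(model.decision_function(outlier))  # negative score
```

The `predict` method simply thresholds this score at zero, returning +1 for normal points and -1 for anomalies.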