Summary

In this chapter, we presented some fundamental clustering algorithms. We started with KNN, which is an instance-based method that reorganizes the dataset so that the samples most similar to a query point can be found efficiently. We discussed three approaches: a naive one, which is also the most expensive in terms of computational complexity, and two strategies based, respectively, on the construction of a KD Tree and a Ball Tree. These two data structures can dramatically improve performance even when the number of samples is very large.
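
As a quick reminder of how the three strategies can be selected in practice, the following is a minimal sketch based on scikit-learn's NearestNeighbors class; the dataset and the query point are purely illustrative and are not taken from the chapter:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Synthetic 2D dataset (illustrative only)
X = np.random.uniform(0.0, 10.0, size=(1000, 2))

# Compare the naive (brute-force) approach with the KD Tree and Ball Tree strategies
for algorithm in ('brute', 'kd_tree', 'ball_tree'):
    nn = NearestNeighbors(n_neighbors=5, algorithm=algorithm)
    nn.fit(X)
    distances, indices = nn.kneighbors([[5.0, 5.0]])
    print(algorithm, indices[0])
```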

The next topic was a classic algorithm: K-means, which is a symmetric partitioning strategy, comparable to a Gaussian mixture with variances close to zero, that can solve many real-life problems. We discussed both the vanilla algorithm, whose random initialization can lead it to converge to poor sub-optimal solutions, and an optimized initialization method, called K-means++, which speeds up convergence towards solutions quite close to the global minimum. In the same section, we also presented some evaluation methods that can be employed to assess the performance of a generic clustering algorithm.
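
A minimal sketch of this workflow, assuming scikit-learn and a synthetic blob dataset (both illustrative choices), is shown below; the K-means++ initialization is selected through the init parameter, and the silhouette score is used as one possible evaluation metric:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic dataset with 4 well-separated blobs (illustrative only)
X, _ = make_blobs(n_samples=1000, centers=4, random_state=1000)

# K-means with the K-means++ initialization strategy
km = KMeans(n_clusters=4, init='k-means++', n_init=10, random_state=1000)
labels = km.fit_predict(X)

# One of the possible evaluation metrics discussed for clustering results
print('Silhouette score: {:.3f}'.format(silhouette_score(X, labels)))
```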

We also presented a soft-clustering method called fuzzy C-means, which resembles the structure of standard K-means, but allows us to manage membership degrees (analogous to probabilities) that encode the similarity of a sample to all cluster centroids. This kind of approach allows the membership vectors to be processed in more complex pipelines, where, for example, the output of a clustering stage is fed into a classifier.
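
The following is a compact NumPy sketch of the two alternating fuzzy C-means update rules (centroids and membership degrees); the dataset, the number of clusters, and the fuzzifier m are illustrative assumptions, not the chapter's original values:

```python
import numpy as np

# Illustrative dataset and hyperparameters
X = np.random.uniform(0.0, 10.0, size=(500, 2))
n_clusters, m, n_iter = 3, 2.0, 50

# Random initial membership matrix, normalized so that each row sums to 1
rng = np.random.RandomState(1000)
U = rng.uniform(size=(X.shape[0], n_clusters))
U /= U.sum(axis=1, keepdims=True)

for _ in range(n_iter):
    # Update the centroids as membership-weighted means
    W = U ** m
    centroids = (W.T @ X) / W.sum(axis=0)[:, None]
    # Update the membership degrees from the distances to each centroid
    D = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
    U = 1.0 / np.sum((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)

# Each row of U is a membership vector that can be fed into a further processing stage
print(U[:5])
```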

One of the most important limitations of K-means and similar algorithms is the symmetric structure of the clusters. This problem can be solved with methods such as spectral clustering, which is a very powerful approach based on the dataset graph and is quite similar to non-linear dimensionality reduction methods. We analyzed an algorithm proposed by Shi and Malik, showing how it can easily separate a non-convex dataset.
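
As an illustration, the following sketch applies scikit-learn's SpectralClustering class, which implements a normalized spectral clustering closely related to the Shi and Malik approach discussed in the chapter, to a non-convex two-moons dataset; all parameters are illustrative:

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Non-convex dataset that K-means cannot separate correctly
X, _ = make_moons(n_samples=1000, noise=0.05, random_state=1000)

# Spectral clustering based on a nearest-neighbor affinity graph
sc = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
                        n_neighbors=20, random_state=1000)
labels = sc.fit_predict(X)
print(labels[:20])
```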

In the next chapter, Chapter 8, Ensemble Learning, we're going to discuss some common ensemble learning methods, which are based on the use of a large set of weak classifiers. We'll focus on their peculiarities, comparing the performance of different ensembles with that of single strong classifiers.
