Locating regions of high density via DBSCAN

Although we can't cover the vast number of different clustering algorithms in this chapter, let's at least introduce one more approach to clustering: Density-based Spatial Clustering of Applications with Noise (DBSCAN). The notion of density in DBSCAN is defined as the number of points within a specified radius ε (epsilon).

In DBSCAN, a special label is assigned to each sample (point) using the following criteria:

  • A point is considered a core point if at least a specified number (MinPts) of neighboring points fall within the specified radius ε
  • A border point is a point that has fewer neighbors than MinPts within ε, but lies within the ε radius of a core point
  • All other points that are neither core nor border points are considered noise points (a short code sketch of these labeling rules follows this list)
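
To make these labeling rules more concrete, here is a minimal sketch that classifies the points of a small, randomly generated toy dataset by hand; the array X_demo and the values chosen for eps and min_samples are purely illustrative, and scikit-learn's NearestNeighbors is used only to collect the ε-neighborhoods:

>>> import numpy as np
>>> from sklearn.neighbors import NearestNeighbors
>>> # purely illustrative hyperparameter values
>>> eps, min_samples = 0.15, 5
>>> rng = np.random.RandomState(0)
>>> X_demo = rng.uniform(size=(50, 2))
>>> # for every point, collect the indices of all points within radius eps
>>> # (each point is contained in its own neighborhood)
>>> nn = NearestNeighbors(radius=eps).fit(X_demo)
>>> neighborhoods = nn.radius_neighbors(X_demo, return_distance=False)
>>> # core point: at least min_samples points inside its eps-neighborhood
>>> is_core = np.array([len(idx) >= min_samples for idx in neighborhoods])
>>> # border point: not a core point, but within eps of at least one core point
>>> is_border = np.array([not is_core[i] and is_core[idx].any()
...                       for i, idx in enumerate(neighborhoods)])
>>> # noise point: neither core nor border
>>> is_noise = ~(is_core | is_border)
>>> print(is_core.sum(), is_border.sum(), is_noise.sum())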

After labeling the points as core, border, or noise points, the DBSCAN algorithm can be summarized in two simple steps:

  1. Form a separate cluster for each core point or a connected group of core points (core points are connected if they are no farther away than ε).
  2. Assign each border point to the cluster of its corresponding core point.

To get a better understanding of what the result of DBSCAN can look like before jumping to the implementation, let's summarize what you have learned about core points, border points, and noise points in the following figure:

[Figure: core points, border points, and noise points in DBSCAN]

One of the main advantages of using DBSCAN is that it does not assume that the clusters have a spherical shape as in k-means. Furthermore, DBSCAN is different from k-means and hierarchical clustering in that it doesn't necessarily assign each point to a cluster but is capable of removing noise points.

For a more illustrative example, let's create a new dataset of half-moon-shaped structures to compare k-means clustering, hierarchical clustering, and DBSCAN:

>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_moons
>>> X, y = make_moons(n_samples=200, 
...                   noise=0.05, 
...                   random_state=0)
>>> plt.scatter(X[:,0], X[:,1])
>>> plt.show()

As we can see in the resulting plot, there are two visible, half-moon-shaped groups consisting of 100 sample points each:

[Figure: scatterplot of the two half-moon-shaped groups]

We will start by using the k-means algorithm and complete linkage clustering to see whether one of those previously discussed clustering algorithms can successfully identify the half-moon shapes as separate clusters. The code is as follows:

>>> from sklearn.cluster import KMeans, AgglomerativeClustering
>>> f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,3))
>>> km = KMeans(n_clusters=2, 
...             random_state=0)
>>> y_km = km.fit_predict(X)
>>> ax1.scatter(X[y_km==0,0], 
...             X[y_km==0,1], 
...             c='lightblue', 
...             marker='o', 
...             s=40, 
...             label='cluster 1')
>>> ax1.scatter(X[y_km==1,0], 
...             X[y_km==1,1], 
...             c='red', 
...             marker='s', 
...             s=40, 
...             label='cluster 2')
>>> ax1.set_title('K-means clustering')
>>> ac = AgglomerativeClustering(n_clusters=2, 
...                              affinity='euclidean',
...                              linkage='complete')
>>> y_ac = ac.fit_predict(X)
>>> ax2.scatter(X[y_ac==0,0], 
...             X[y_ac==0,1], 
...             c='lightblue', 
...             marker='o', 
...             s=40, 
...             label='cluster 1')
>>> ax2.scatter(X[y_ac==1,0], 
...             X[y_ac==1,1], 
...             c='red', 
...             marker='s', 
...             s=40, 
...             label='cluster 2')
>>> ax2.set_title('Agglomerative clustering')
>>> plt.legend()
>>> plt.show()

Based on the visualized clustering results, we can see that the k-means algorithm is unable to separate the two clusters, and the hierarchical clustering algorithm was challenged by those complex shapes:

[Figure: k-means and agglomerative clustering results on the half-moon dataset]

Finally, let's try the DBSCAN algorithm on this dataset to see if it can find the two half-moon-shaped clusters using a density-based approach:

>>> from sklearn.cluster import DBSCAN
>>> db = DBSCAN(eps=0.2, 
...             min_samples=5, 
...             metric='euclidean')
>>> y_db = db.fit_predict(X)
>>> plt.scatter(X[y_db==0,0], 
...             X[y_db==0,1], 
...             c='lightblue', 
...             marker='o', 
...             s=40, 
...             label='cluster 1')
>>> plt.scatter(X[y_db==1,0], 
...             X[y_db==1,1], 
...             c='red', 
...             marker='s', 
...             s=40, 
...             label='cluster 2')
>>> plt.legend()
>>> plt.show()

The DBSCAN algorithm can successfully detect the half-moon shapes, which highlights one of the strengths of DBSCAN (clustering data of arbitrary shapes):

[Figure: DBSCAN clustering of the half-moon dataset]
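
Note that any samples DBSCAN treats as noise are not assigned to either cluster; scikit-learn marks them with the cluster label -1 in the array returned by fit_predict. If you want to inspect or plot them as well, a small addition along the following lines (reusing X and y_db from the previous example) would do:

>>> # noise samples, if any, carry the cluster label -1
>>> X_noise = X[y_db == -1]
>>> print('Number of noise points:', len(X_noise))
>>> plt.scatter(X_noise[:, 0],
...             X_noise[:, 1],
...             c='gray',
...             marker='x',
...             s=40,
...             label='noise')
>>> plt.legend()
>>> plt.show()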

However, we should also note some of the disadvantages of DBSCAN. With an increasing number of features in our dataset (given a fixed-size training set), the negative effect of the curse of dimensionality increases. This is especially a problem if we are using the Euclidean distance metric. However, the problem of the curse of dimensionality is not unique to DBSCAN; it also affects other clustering algorithms that use the Euclidean distance metric, for example, the k-means and hierarchical clustering algorithms. In addition, we have two hyperparameters in DBSCAN (MinPts and ε) that need to be optimized to yield good clustering results. Finding a good combination of MinPts and ε can be problematic if the density differences in the dataset are relatively large.
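
As a rough illustration of this sensitivity (a sketch rather than a tuning recipe), we could rerun DBSCAN on the half-moon dataset for a few candidate eps values and observe how the number of clusters and noise points changes:

>>> import numpy as np
>>> for eps in [0.05, 0.2, 0.5]:
...     labels = DBSCAN(eps=eps,
...                     min_samples=5,
...                     metric='euclidean').fit_predict(X)
...     n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
...     print('eps=%.2f: %d clusters, %d noise points'
...           % (eps, n_clusters, np.sum(labels == -1)))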

Note

So far, we saw three of the most fundamental categories of clustering algorithms: prototype-based clustering with k-means, agglomerative hierarchical clustering, and density-based clustering via DBSCAN. However, I also want to mention a fourth class of more advanced clustering algorithms that we have not covered in this chapter: graph-based clustering. Probably the most prominent members of the graph-based clustering family are spectral clustering algorithms. Although there are many different implementations of spectral clustering, they all have in common that they use the eigenvectors of a similarity matrix to derive the cluster relationships. Since spectral clustering is beyond the scope of this book, you can read the excellent tutorial by Ulrike von Luxburg to learn more about this topic (U. Von Luxburg. A Tutorial on Spectral Clustering. Statistics and computing, 17(4):395–416, 2007). It is freely available from arXiv at http://arxiv.org/pdf/0711.0189v1.pdf.
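
Although we won't discuss spectral clustering any further, scikit-learn does ship an implementation; purely as a teaser, a minimal sketch on the half-moon dataset from earlier could look like this (the affinity setting below is just one plausible choice, not a recommendation):

>>> from sklearn.cluster import SpectralClustering
>>> # spectral clustering builds a similarity graph and clusters its eigenvectors
>>> sc = SpectralClustering(n_clusters=2,
...                         affinity='nearest_neighbors',
...                         random_state=0)
>>> y_sc = sc.fit_predict(X)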

Note that, in practice, it is not always obvious which algorithm will perform best on a given dataset, especially if the data comes in multiple dimensions that make it hard or impossible to visualize. Furthermore, it is important to emphasize that a successful clustering does not only depend on the algorithm and its hyperparameters. Rather, the choice of an appropriate distance metric and the use of domain knowledge that can help guide the experimental setup can be even more important.
