Chapter 5

From Bi-Level Sparse Clustering to Deep Clustering

Zhangyang Wang    Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States

Abstract

Many clustering methods highly depend on extracted features. We propose a joint optimization framework in terms of both feature extraction and discriminative clustering. We utilize graph regularized sparse codes as the features, and formulate sparse coding as the constraint for clustering. Two cost functions are developed based on entropy-minimization and maximum-margin clustering principles, respectively. They are considered as the objectives to be minimized. Solving such a bi-level optimization mutually reinforces both sparse coding and clustering steps. Experiments on several benchmark datasets verify remarkable performance improvements led by the proposed joint optimization.

While sparse coding-based clustering methods have been shown to be successful, their bottlenecks in both efficiency and scalability limit their practical usage. In recent years, deep learning has proven to be a highly effective, efficient and scalable feature learning tool. We propose to emulate the sparse coding-based clustering pipeline in the context of deep learning, leading to a carefully crafted deep model that benefits from both. A feed-forward network structure, named TAGnet, is constructed based on a graph-regularized sparse coding algorithm. It is then trained with task-specific loss functions from end to end. We discover that connecting deep learning to sparse coding benefits not only the model performance, but also its initialization and interpretation. Moreover, by introducing auxiliary clustering tasks on the intermediate feature hierarchy, we formulate DTAGnet and obtain a further performance boost. Extensive experiments demonstrate that the proposed model outperforms several state-of-the-art methods by remarkable margins.

Keywords

Sparse coding; Deep learning; Clustering; Manifold

5.1 A Joint Optimization Framework of Sparse Coding and Discriminative Clustering1

5.1.1 Introduction

Clustering aims to divide data into groups of similar objects (clusters), and plays an important role in many real-world data mining applications. To learn the hidden patterns of a dataset in an unsupervised way, existing clustering algorithms can be described as either generative or discriminative in nature. Generative clustering algorithms model categories in terms of their geometric properties in feature spaces, or as statistical processes of data. Examples include K-means [1] and Gaussian mixture model (GMM) clustering [2], which assume a parametric form of the underlying category distributions. Rather than modeling categories explicitly, discriminative clustering techniques search for the boundaries or distinctions between categories. With fewer assumptions being made, these methods are powerful and flexible in practice. For example, maximum-margin clustering [3–5] aims to find the hyperplane that can separate the data from different classes with a maximum margin. Information-theoretic clustering [6,7] minimizes the conditional entropy of all samples. Many recent discriminative clustering methods have achieved very satisfactory performances [5].

Moreover, many clustering methods extract discriminative features from input data prior to clustering. The principal component analysis (PCA) feature is a common choice but not necessarily discriminative [8]. Kernel-based clustering methods [9] were explored to find implicit feature representations of input data. In [10], the features are selected to optimize the discriminativity of the partitioning algorithm used, by solving a linear discriminant analysis (LDA) problem. More recently, sparse codes have been shown to be robust to noise and capable of handling high-dimensional data [11]. Furthermore, the $\ell_1$-graph [12] builds a graph by reconstructing each data point sparsely and locally from the other data points; spectral clustering [13] is then performed on the constructed graph matrix. In [14,15], dictionary learning is combined with the clustering process, using Lloyd's-type algorithms that iteratively reassign data to clusters and then optimize the dictionary associated with each cluster. In [8], the authors learned sparse codes that explicitly preserve the local data manifold structures. Their results indicate that encoding geometrical information significantly enhances the learning performance. A joint optimization of clustering and manifold structure was further considered in [16]. However, the clustering step in [8,16] is not coupled with the discriminative clustering methods mentioned above.

In this section, we propose to jointly optimize feature extraction and discriminative clustering, in which way they mutually reinforce each other. We focus on sparse codes as the extracted features, and develop our loss functions based on two representative discriminative clustering methods, the entropy-minimization [6] and maximum-margin [3] clustering, respectively. A task-driven bi-level optimization model [17,18] is then built upon the proposed framework. The sparse coding step is formulated as the lower-level constraint, where a graph regularization is enforced to preserve the local manifold structure [8]. The clustering-oriented cost functions are considered as the upper-level objectives to be minimized. Stochastic gradient descent algorithms are developed to solve both bi-level models. Experiments on several popular real datasets verify the noticeable performance improvement led by such a joint optimization framework.

5.1.2 Model Formulation

5.1.2.1 Sparse Coding with Graph Regularization

Sparse codes have proven to be an effective feature for clustering. In [12], the authors suggested that the contribution of one sample to the reconstruction of another sample is a good indicator of similarity between the two samples. Therefore, the reconstruction coefficients (sparse codes) can be used to constitute the similarity graph for spectral clustering. The $\ell_1$-graph performs sparse representation for each data point separately, without considering the geometric information and manifold structure of the entire data. Further research shows that graph regularized sparse representations produce superior results in various clustering and classification tasks [8,19]. In this section, we adopt the graph regularized sparse codes as the features for clustering.

We assume that all the data samples $X = [x_1, x_2, \ldots, x_n]$, $x_i \in \mathbb{R}^{m \times 1}$, $i = 1, 2, \ldots, n$, are encoded into their corresponding sparse codes $A = [a_1, a_2, \ldots, a_n]$, $a_i \in \mathbb{R}^{p \times 1}$, $i = 1, 2, \ldots, n$, using a learned dictionary $D = [d_1, d_2, \ldots, d_p]$, where $d_i \in \mathbb{R}^{m \times 1}$, $i = 1, 2, \ldots, p$, are the learned atoms. Moreover, given a pairwise similarity matrix $W$, the sparse representations that capture the geometric structure of the data according to the manifold assumption should minimize the following objective: $\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} W_{ij}\|a_i - a_j\|_2^2 = \mathrm{Tr}(A L A^T)$, where $L$ is the graph Laplacian matrix constructed from $W$. In this section, $W$ is chosen as the Gaussian kernel, $W_{ij} = \exp\left(-\frac{\|x_i - x_j\|_2^2}{2\delta^2}\right)$, where $\delta$ is the controlling parameter selected by cross-validation.
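
To make the construction concrete, the following minimal numpy sketch builds the Gaussian-kernel affinity $W$ and the graph Laplacian $L$ described above; the bandwidth value and the toy data are placeholders rather than the settings used in the experiments.

```python
import numpy as np

def gaussian_affinity(X, delta=1.0):
    """Pairwise Gaussian-kernel affinity for the columns of X (m x n)."""
    sq_norms = np.sum(X ** 2, axis=0)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X.T @ X
    d2 = np.maximum(d2, 0.0)                 # guard against tiny negative values
    return np.exp(-d2 / (2.0 * delta ** 2))

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = diag(W 1) - W."""
    return np.diag(W.sum(axis=1)) - W

# Toy usage: 100 random 20-dimensional samples stored as columns of X.
X = np.random.randn(20, 100)
W = gaussian_affinity(X, delta=1.0)
L = graph_laplacian(W)
```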

The graph regularized sparse codes are obtained by solving the following convex optimization:

$$A = \arg\min_A \frac{1}{2}\|X - DA\|_F^2 + \lambda\sum_i\|a_i\|_1 + \alpha\,\mathrm{Tr}(ALA^T) + \lambda_2\|A\|_F^2. \tag{5.1}$$

Note that $\lambda_2 > 0$ is necessary for proving the differentiability of the objective function (see Theorem 5.2 in the Appendix). However, setting $\lambda_2 = 0$ proves to work well in practice, and thus the term $\lambda_2\|A\|_F^2$ will be omitted by default hereinafter (except in the differentiability proof).

Obviously, the effect of the sparse codes A largely depends on the quality of the dictionary D. Dictionary learning methods, such as the K-SVD algorithm [20], are widely used in the sparse coding literature. In regard to clustering, the authors of [12,19] constructed the dictionary by directly selecting atoms from the data samples. Zheng et al. [8] learned a dictionary that can reconstruct the input data well; however, such a dictionary does not necessarily lead to discriminative features for clustering. In contrast, we will optimize D together with the clustering task.

5.1.2.2 Bi-level Optimization Formulation

The objective cost function for the joint framework can be expressed by the following bi-level optimization:

$$\min_{D, w}\; C(A, w) \quad \text{s.t.}\quad A = \arg\min_A \frac{1}{2}\|X - DA\|_F^2 + \lambda\sum_i\|a_i\|_1 + \alpha\,\mathrm{Tr}(ALA^T), \tag{5.2}$$

where $C(A, w)$ is a cost function evaluating the loss of clustering. It can be formulated differently based on various clustering principles, two of which will be discussed and solved in Sect. 5.1.3.

Bi-level optimization [21] has been investigated in both theory and application. In [21], the authors proposed a general bi-level sparse coding model for learning dictionaries across coupled signal spaces. Another similar formulation has been studied in [17] for general regression tasks.

5.1.3 Clustering-Oriented Cost Functions

Assuming $K$ clusters, let $w = [w_1, \ldots, w_K]$ be the set of parameters of the loss function, where $w_i$ corresponds to the $i$th cluster, $i = 1, 2, \ldots, K$. We introduce two forms of loss functions, each of which is derived from a representative discriminative clustering method.

5.1.3.1 Entropy-Minimization Loss

Maximization of the mutual information with respect to the parameters of the encoder model effectively defines a discriminative unsupervised optimization framework. The model is parameterized similarly to a conditionally trained classifier, but the cluster allocations are unknown [7]. In [22,6], the authors adopted an information-theoretic framework as an implementation of the low-density separation assumption by minimizing the conditional entropy. By substituting the logistic posterior probability into the minimum conditional entropy principle, the authors obtained the logistic clustering algorithm, which is equivalent to finding a labeling strategy such that the total entropy of the data clustering is minimized.

Since the true cluster label of each $x_i$ is unknown, we introduce the predicted confidence probability $p_{ij}$ that sample $x_i$ belongs to cluster $j$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, K$, which is set as the likelihood of the multinomial logistic (softmax) regression

$$p_{ij} = p(j \mid w, a_i) = \frac{1}{1 + e^{-w_j^T a_i}}. \tag{5.3}$$

The loss function for all data could be defined accordingly in an entropy-like form

$$C(A, w) = -\sum_{i=1}^{n}\sum_{j=1}^{K} p_{ij}\log p_{ij}. \tag{5.4}$$

The predicted cluster label of $a_i$ is the cluster $j$ for which it achieves the largest likelihood $p_{ij}$. Logistic regression can deal with multiclass problems more easily than the support vector machine (SVM). The next important issue to study is the differentiability of (5.2).
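
As an illustration, the sketch below computes the confidences (5.3), the entropy-like loss (5.4), and the resulting cluster assignments with plain numpy; the matrix shapes and the small eps safeguard are implementation choices, not part of the original formulation.

```python
import numpy as np

def cluster_posteriors(A, w):
    """Per-cluster confidences p_ij of (5.3).
    A: p x n matrix of sparse codes (columns a_i); w: p x K parameter matrix."""
    return 1.0 / (1.0 + np.exp(-(A.T @ w)))      # n x K

def entropy_loss(A, w, eps=1e-12):
    """Entropy-like clustering loss of (5.4): -sum_ij p_ij log p_ij."""
    P = cluster_posteriors(A, w)
    return -np.sum(P * np.log(P + eps))

def predict_labels(A, w):
    """Assign each sample to the cluster with the largest confidence."""
    return np.argmax(cluster_posteriors(A, w), axis=1)
```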

Theorem 5.1

The objective $C(A, w)$ defined in (5.4) is differentiable on $D \times w$.

Proof

Denote $X \in \mathcal{X}$ and $D \in \mathcal{D}$. Also let the objective function $C(A, w)$ in (5.4) be denoted as $C$ for short. The differentiability of $C$ with respect to $w$ is easy to show, using only the compactness of $\mathcal{X}$, as well as the fact that $C$ is twice differentiable.

We will therefore focus on showing that $C$ is differentiable with respect to $D$, which is more difficult since $A$, and thus $a_i$, is not differentiable everywhere. Without loss of generality, we use a vector $a$ instead of $A$ to simplify the derivations hereinafter. In some cases, we may equivalently express $a$ as $a(D, w)$ in order to emphasize the functional dependence. Based on Theorem 5.2 in the Appendix, and given a small perturbation $E \in \mathbb{R}^{m \times p}$, it follows that

$$C(a(D+E), w) - C(a(D), w) = \nabla_z C_w^T\big(a(D+E) - a(D)\big) + O(\|E\|_F^2), \tag{5.5}$$

where the term $O(\|E\|_F^2)$ is based on the fact that $a(D, x)$ is uniformly Lipschitz and $\mathcal{X} \times \mathcal{D}$ is compact. It is then possible to show that

$$C(a(D+E), w) - C(a(D), w) = \mathrm{Tr}\big(E^T g(a(D+E), w)\big) + O(\|E\|_F^2), \tag{5.6}$$

where $g$ has the form given in Algorithm 5.1. This shows that $C$ is differentiable on $D$. □

Algorithm 5.1 Stochastic gradient descent algorithm for solving (5.2), with $C(A, w)$ as defined in (5.4).

Building on the differentiability proof, we are able to solve (5.2) using a projected first-order stochastic gradient descent (SGD) algorithm, whose detailed steps are outlined in Algorithm 5.1. At a high level, it consists of an outer stochastic gradient descent loop that incrementally samples the training data. Each sample is used to approximate the gradients with respect to the classifier parameter w and the dictionary D, which are then used to update them.
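
The following structural sketch shows the kind of projected SGD loop that Algorithm 5.1 implements. Since the algorithm itself is given as a figure, the lower-level solver and the two gradient routines are left as placeholder callables, and the decaying step size and the unit-ball projection of dictionary columns are assumptions borrowed from common task-driven dictionary learning practice [17], not quoted from the algorithm.

```python
import numpy as np

def sgd_bilevel(X, D, w, solve_sparse_code, grad_w, grad_D,
                iters=30, rho=0.9, t0=5):
    """Structural sketch of a projected SGD loop in the spirit of Algorithm 5.1.

    solve_sparse_code stands for the lower-level graph-regularized sparse coding
    solver, while grad_w / grad_D stand for the gradients of the clustering loss
    (the exact form of the dictionary gradient g appears in Algorithm 5.1).
    """
    n = X.shape[1]
    t = 0
    for _ in range(iters):
        for i in np.random.permutation(n):
            t += 1
            lr = rho * min(1.0, t0 / t)          # assumed decaying step size
            x = X[:, i:i + 1]
            a = solve_sparse_code(x, D)          # lower-level problem
            w = w - lr * grad_w(a, w)            # update clustering parameters
            D = D - lr * grad_D(x, a, D, w)      # update dictionary
            # Project dictionary columns back onto the unit ball.
            D = D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)
    return D, w
```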

5.1.3.2 Maximum-Margin Loss

Xu et al. [3] proposed maximum margin clustering (MMC), which borrows the idea from the SVM theory. Their experimental results showed that the MMC technique could often obtain more accurate results than conventional clustering methods. Technically, MMC just finds a way to label the samples by running an SVM implicitly, and the SVM margin obtained is maximized over all possible labelings [5]. However, unlike supervised large margin methods, which are usually formulated as convex optimization problems, maximum margin clustering is a nonconvex integer optimization problem, which is much more difficult to solve. Li et al. [23] made several relaxations to the original MMC problem and reformulated it as a semidefinite programming (SDP) problem. The cutting plane maximum margin clustering (CPMMC) algorithm was presented in [5] to solve MMC with a much improved efficiency.

To develop the multiclass max-margin loss for clustering, we refer to the classical multiclass SVM formulation in [24]. Given that the sparse codes $a_i$ comprise the features to be clustered, we define the multiclass model as

$$f(a_i) = \arg\max_{j=1,\ldots,K} f_j(a_i) = \arg\max_{j=1,\ldots,K} \big(w_j^T a_i\big), \tag{5.7}$$

where $f_j$ is the prototype for the $j$th cluster and $w_j$ is its corresponding weight vector. The predicted cluster label of $a_i$ is the cluster whose weight vector achieves the maximum value $w_j^T a_i$. Letting $w = [w_1, \ldots, w_K]$, the multiclass max-margin loss for $a_i$ is defined as

$$C(a_i, w) = \max\big(0,\, 1 + f_{r_i}(a_i) - f_{y_i}(a_i)\big), \quad \text{where } y_i = \arg\max_{j=1,\ldots,K} f_j(a_i),\ \ r_i = \arg\max_{j=1,\ldots,K,\, j\neq y_i} f_j(a_i). \tag{5.8}$$

Note that, different from training a multiclass SVM classifier, where $y_i$ is given as a training label, the clustering scenario requires us to jointly estimate $y_i$ as a variable. The overall max-margin loss to be minimized is (with $\lambda$ as the coefficient)

$$C(A, w) = \frac{\lambda}{2}\|w\|^2 + \sum_{i=1}^{n} C(a_i, w). \tag{5.9}$$

Solving (5.8) or (5.9) within the same framework as the logistic loss involves two additional concerns, which need to be handled specifically.

First, the hinge loss of the form (5.8) is not differentiable; only a subgradient exists. This makes the objective function $C(A, w)$ nondifferentiable on $D \times w$, so the analysis in the proof of Theorem 5.1 cannot be applied. We could have used the squared hinge loss or the modified Huber loss to obtain a quadratically smoothed loss function [25]. However, as we verified in experiments, the quadratically smoothed loss is not as good as the hinge loss in terms of training time and sparsity. Also, though not theoretically guaranteed, using a subgradient of $C(A, w)$ works well in our case.

Second, given that $w$ is fixed, it should be noted that $y_i$ and $r_i$ are both functions of $a_i$. Therefore, calculating the derivative of (5.8) with respect to $a_i$ would involve expanding both $r_i$ and $y_i$, making the analysis quite complicated. Instead, we borrow ideas from the regularity of the elastic net solution [17], namely that the set of nonzero coefficients of the elastic net solution should not change for small perturbations. Similarly, due to the continuity of the objective, it is assumed that a sufficiently small perturbation over the current $a_i$ will not change $y_i$ and $r_i$. Therefore, in each iteration, we can directly precalculate $y_i$ and $r_i$ using the current $w$ and $a_i$ and fix them for the $a_i$ updates.2

Given the above two approaches, for a single sample $a_i$, if the hinge loss is larger than 0, the derivative of (5.8) with respect to $w$ is

$$\Delta_{ij} = \begin{cases} \lambda w_j - a_i & \text{if } j = y_i,\\ \lambda w_j + a_i & \text{if } j = r_i,\\ \lambda w_j & \text{otherwise}, \end{cases} \tag{5.10}$$

where $\Delta_{ij}$ denotes the $j$th element of the derivative for the sample $a_i$. If the hinge loss is less than 0, then $\Delta_{ij} = \lambda w_j$. The derivative of (5.8) with respect to $a_i$ is $w_{r_i} - w_{y_i}$ if the hinge loss is larger than 0, and 0 otherwise. Note that the above deduction can be conducted in batch mode. The problem is then similarly solved using a projected SGD algorithm, whose steps are outlined in Algorithm 5.2.

Algorithm 5.2 Stochastic gradient descent algorithm for solving (5.2), with $C(A, w)$ as defined in (5.9).
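
A minimal numpy sketch of the per-sample hinge loss (5.8) and the subgradients of Sect. 5.1.3.2, which form the core computation inside Algorithm 5.2, is given below; the value of the coefficient $\lambda$ is only a placeholder.

```python
import numpy as np

def max_margin_loss_and_grads(a, w, lam=1e-3):
    """Hinge loss (5.8) for one sample a and its subgradients per (5.10).
    a: p-dimensional sparse code; w: p x K weight matrix; lam: coefficient lambda."""
    scores = w.T @ a                       # f_j(a) = w_j^T a
    y = int(np.argmax(scores))             # currently predicted cluster y_i
    scores_wo_y = scores.copy()
    scores_wo_y[y] = -np.inf
    r = int(np.argmax(scores_wo_y))        # runner-up cluster r_i
    loss = max(0.0, 1.0 + scores[r] - scores[y])

    grad_w = lam * w                       # regularizer part of (5.9)
    grad_a = np.zeros_like(a)
    if loss > 0:
        grad_w[:, y] -= a                  # j = y_i case of (5.10)
        grad_w[:, r] += a                  # j = r_i case of (5.10)
        grad_a = w[:, r] - w[:, y]         # derivative of (5.8) w.r.t. a_i
    return loss, grad_w, grad_a
```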

5.1.4 Experiments

5.1.4.1 Datasets

We conduct our clustering experiments on four popular real datasets, which are summarized in Table 5.1. The ORL face database contains 400 facial images of 40 subjects, each subject having 10 images of size $32 \times 32$. The images were taken at different times, with varying lighting and facial expressions. The subjects are all in an upright, frontal position against a dark homogeneous background. The MNIST handwritten digit database consists of a total of 70,000 images, with digits ranging from 0 to 9. The digits are normalized and centered in fixed-size images of $28 \times 28$. The COIL20 image library contains 1440 images of size $32 \times 32$ for 20 objects. Each object has 72 images, taken 5 degrees apart as the object was rotated on a turntable. The CMU-PIE face database contains 68 subjects with 41,368 face images in total. For each subject, we have 21 images of size $32 \times 32$ under different lighting conditions.

Table 5.1

Comparison of all datasets

| Name | Number of Images | Class | Dimension |
|---|---|---|---|
| ORL | 400 | 10 | 1024 |
| MNIST | 70,000 | 10 | 784 |
| COIL20 | 1440 | 20 | 1024 |
| CMU-PIE | 41,368 | 68 | 1024 |

5.1.4.2 Evaluation Metrics

We apply two widely used measures to evaluate the performance of the clustering methods, the accuracy and the normalized mutual information (NMI) [8,12]. Suppose the predicted label of $x_i$ produced by the clustering method is $\hat{y}_i$, and $y_i$ is the ground-truth label. The accuracy is defined as

$$\mathrm{Acc} = \frac{\sum_{i=1}^{n} I\big(\Phi(\hat{y}_i) = y_i\big)}{n}, \tag{5.11}$$

where $I$ is the indicator function, and $\Phi$ is the best permutation mapping function [26]. On the other hand, suppose the clusters obtained from the predicted labels $\{\hat{y}_i\}_{i=1}^{n}$ and the ground-truth labels $\{y_i\}_{i=1}^{n}$ are $\hat{C}$ and $C$, respectively. The mutual information between $\hat{C}$ and $C$ is defined as

$$\mathrm{MI}(\hat{C}, C) = \sum_{\hat{c}\in\hat{C},\, c\in C} p(\hat{c}, c)\,\log\frac{p(\hat{c}, c)}{p(\hat{c})\,p(c)}, \tag{5.12}$$

where $p(\hat{c})$ and $p(c)$ are the probabilities that a data point belongs to the clusters $\hat{c}$ and $c$, respectively, and $p(\hat{c}, c)$ is the probability that a data point jointly belongs to both. The normalized mutual information (NMI) is defined as

$$\mathrm{NMI}(\hat{C}, C) = \frac{\mathrm{MI}(\hat{C}, C)}{\max\{H(\hat{C}), H(C)\}}, \tag{5.13}$$

where $H(\hat{C})$ and $H(C)$ are the entropies of $\hat{C}$ and $C$, respectively. NMI takes values in $[0, 1]$.
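
Both measures can be computed as follows; the accuracy uses the Hungarian algorithm to find the best permutation mapping $\Phi$, and the NMI call relies on scikit-learn, whose default normalization may differ slightly from the $\max\{H(\hat{C}), H(C)\}$ convention in (5.13).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Accuracy of (5.11); assumes nonnegative integer labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                          # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)    # maximize matched counts
    return cost[rows, cols].sum() / y_true.size

def clustering_nmi(y_true, y_pred):
    """NMI of (5.13), computed via scikit-learn."""
    return normalized_mutual_info_score(y_true, y_pred)
```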

5.1.4.3 Comparison Experiments

Comparison Methods

We compare the following eight methods on all four datasets:

  •  KM, K-means clustering on the input data.
  •  KM + SC, a dictionary D is first learned from the input data by K-SVD [20]. Then KM is performed on the graph-regularized sparse code features (5.1) over D.
  •  EMC, entropy-minimization clustering, by minimizing (5.4) on the input data.
  •  EMC + SC, EMC performed on the graph-regularized sparse codes over the pre-learned K-SVD dictionary D.
  •  MMC, maximum-margin clustering [5].
  •  MMC + SC, MMC performed on the graph-regularized sparse codes over the pre-learned K-SVD dictionary D.
  •  Joint EMC, the proposed joint optimization (5.2), with $C(A, w)$ as defined in (5.4).
  •  Joint MMC, the proposed joint optimization (5.2), with $C(A, w)$ as defined in (5.9).

All images are first reshaped into vectors, and PCA is then applied to reduce the data dimensionality while keeping 98% of the information, a preprocessing step also used in [8] to improve efficiency. The multiclass MMC algorithm is implemented based on the publicly available CPMMC code for two-class clustering [5], following the multiclass case descriptions in the original paper. For all algorithms that involve graph-regularized sparse coding, the graph regularization parameter α is fixed to 1, and the dictionary size p is 128 by default. For joint EMC and joint MMC, we set ITER to 30, ρ to 0.9, and $t_0$ to 5. Other parameters in the competing methods are tuned in cross-validation experiments to the best of our efforts.
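
The PCA preprocessing step can be reproduced along the following lines; the random array merely stands in for the reshaped image matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

# Keep enough principal components to retain 98% of the variance, as in the
# preprocessing step described above.
X_raw = np.random.rand(400, 1024)        # stand-in for the reshaped images
pca = PCA(n_components=0.98, svd_solver="full")
X_reduced = pca.fit_transform(X_raw)
print(X_reduced.shape)                   # (400, number of retained components)
```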

Comparison Analysis

All the comparison results (accuracy and NMI) are listed in Table 5.2, from which we could conclude the following:

  1.  The joint EMC and joint MMC methods each outperform their “non-joint” counterparts, i.e., EMC + SC and MMC + SC, respectively. For example, on the ORL dataset, joint MMC surpasses MMC + SC by around 5% in accuracy and 7% in NMI. This demonstrates that the key contribution of this section, i.e., the joint optimization of the sparse coding and clustering steps, indeed leads to improved performance.
  2.  KM + SC, EMC + SC, and MMC + SC all outperform their counterparts using the raw input data, which verifies that sparse codes are effective features that help improve clustering discriminability.
  3.  The joint MMC obtains the best performance in all cases, outperforming the others, including joint EMC, by significant margins. MMC + SC obtains the second best performance on the last three datasets (for ORL, joint EMC ranks second). These facts reveal the power of the max-margin loss (5.9).

Table 5.2

Accuracy and NMI performance comparisons on all datasets

| Dataset | Metric | KM | KM + SC | EMC | EMC + SC | MMC | MMC + SC | joint EMC | joint MMC |
|---|---|---|---|---|---|---|---|---|---|
| ORL | Acc | 0.5250 | 0.5887 | 0.6011 | 0.6404 | 0.6460 | 0.6968 | 0.7250 | 0.7458 |
| | NMI | 0.7182 | 0.7396 | 0.7502 | 0.7795 | 0.8050 | 0.8043 | 0.8125 | 0.8728 |
| MNIST | Acc | 0.6248 | 0.6407 | 0.6377 | 0.6493 | 0.6468 | 0.6581 | 0.6550 | 0.6784 |
| | NMI | 0.5142 | 0.5397 | 0.5274 | 0.5671 | 0.5934 | 0.6161 | 0.6150 | 0.6451 |
| COIL20 | Acc | 0.6280 | 0.7880 | 0.7399 | 0.7633 | 0.8075 | 0.8493 | 0.8225 | 0.8658 |
| | NMI | 0.7621 | 0.9010 | 0.8621 | 0.8887 | 0.8922 | 0.8977 | 0.8850 | 0.9127 |
| CMU-PIE | Acc | 0.3176 | 0.8457 | 0.7627 | 0.7836 | 0.8482 | 0.8491 | 0.8250 | 0.8783 |
| | NMI | 0.6383 | 0.9557 | 0.8043 | 0.8410 | 0.9237 | 0.9489 | 0.9020 | 0.9675 |

Varying the Number of Clusters

On the COIL20 dataset, we reconduct the clustering experiments with the cluster number K ranging from 2 to 20, using EMC + SC, MMC + SC, joint EMC, and joint MMC. For each K except 20, 10 test runs are conducted on different randomly chosen clusters, and the final scores are obtained by averaging over the 10 tests. Fig. 5.1 shows the clustering accuracy and NMI measurements versus the number of clusters. The two joint methods consistently outperform their non-joint counterparts. As K increases, the performance of the joint methods appears to degrade more gracefully.

Figure 5.1 The clustering accuracy and NMI measurements versus the number of clusters K.
Initialization and Parameters

As a typical case in machine learning, we use SGD in a setting where it is not guaranteed to converge in theory, but behaves well in practice. As observed in our experiments, a good initialization of D and w can affect the final results notably. We initialize joint EMC by the D and w solved from EMC + SC, and joint MMC by the solutions from MMC + SC, respectively.

There are two parameters that we empirically set in advance, the graph regularization parameter α and the dictionary size p. The regularization term imposes stronger smoothness constraints on the sparse codes as α grows larger. Also, while a compact dictionary is more desirable computationally, more redundant dictionaries may lead to less cluttered features that can be better discriminated. We investigate how the clustering performances of EMC + SC, MMC + SC, joint EMC, and joint MMC change on the ORL dataset with various α and p values. As depicted in Figs. 5.2 and 5.3, we observe that:

  1.  As α increases, the accuracy first increases and then decreases (the peak is around α = 1). This could be interpreted as follows: when α is too small, the local manifold information is not sufficiently encoded; when α becomes overly large, the sparse codes are “oversmoothed” with reduced discriminability.
  2.  Increasing the dictionary size p first improves the accuracy sharply, but the improvement soon reaches a plateau. Thus, in practice, we keep a medium dictionary size p = 128 for all experiments.

Figure 5.2 The clustering accuracy and NMI measurements versus the parameter choices of α.
Figure 5.3 The clustering accuracy and NMI measurements versus the parameter choices of p.

5.1.5 Conclusion

We propose a joint framework to optimize sparse coding and discriminative clustering simultaneously. We adopt graph-regularized sparse codes as the feature to be learned, and design two clustering-oriented cost functions, by entropy-minimization and maximum-margin principles, respectively. The formulation of a task-driven bi-level optimization mutually reinforces both sparse coding and clustering steps. Experiments on several benchmark datasets verify the remarkable performance improvement led by the proposed joint optimization.

5.1.6 Appendix

We recall the following lemma, stated as Theorem 5.2, from [17]:

Theorem 5.2

Regularity of the elastic net solution

Consider the formulation in (5.1) (we may drop the last term to obtain the exact elastic net form, without affecting the differentiability conclusions). Assume $\lambda_2 > 0$, and that $\mathcal{X}$ is compact. Then

  •  $a$ is uniformly Lipschitz on $\mathcal{X} \times \mathcal{D}$.
  •  Let $D \in \mathcal{D}$, let $\sigma$ be a positive scalar, and let $s$ be a vector in $\{-1, 0, 1\}^p$. Define $K_s(D, \sigma)$ as the set of vectors $x$ satisfying, for all $j$ in $\{1, \ldots, p\}$,

$$\left|d_j^T(x - Da) - \lambda_2 a[j]\right| \le \lambda_1 - \sigma \quad \text{if } s[j] = 0, \qquad s[j]\,a[j] \ge \sigma \quad \text{if } s[j] \ne 0. \tag{5.14}$$

  •  Then there exists $\kappa > 0$, independent of $s$, $D$ and $\sigma$, such that for all $x \in K_s(D, \sigma)$, the function $a$ is twice continuously differentiable on $B_{\kappa\sigma}(x) \times B_{\kappa\sigma}(D)$, where $B_{\kappa\sigma}(x)$ and $B_{\kappa\sigma}(D)$ denote the open balls of radius $\kappa\sigma$ centered at $x$ and $D$, respectively.

5.2 Learning a Task-Specific Deep Architecture for Clustering3

5.2.1 Introduction

While many classical clustering algorithms have been proposed, such as K-means, Gaussian mixture model (GMM) clustering [2], maximum-margin clustering [3] and information-theoretic clustering [6], most only work well when the data dimensionality is low. Since high-dimensional data exhibits dense grouping in low-dimensional embeddings [27], researchers have been motivated to first project the original data onto a low-dimensional subspace [10] and then cluster on the feature embeddings. Among many feature embedding learning methods, sparse codes [11] have proven to be robust and efficient features for clustering, as verified by many [12,8].

Effectiveness and scalability are two major concerns when designing clustering algorithms for Big Data scenarios [28]. Conventional sparse coding models rely on iterative approximation algorithms, whose inherently sequential structure, as well as data-dependent complexity and latency, often constitutes a major bottleneck in computational efficiency [29]. This also makes it difficult to jointly optimize the unsupervised feature learning and the supervised task-driven steps [17]. Such a joint optimization usually has to rely on solving a complex bi-level optimization [30], as in [31], which constitutes another efficiency bottleneck. What is more, to effectively model and represent datasets of growing sizes, sparse coding needs to refer to larger dictionaries [32]. Since the inference complexity of sparse coding increases more than linearly with respect to the dictionary size [31], the scalability of sparse coding-based clustering methods turns out to be quite limited.

To overcome these limitations, we are motivated to introduce the tool of deep learning into clustering, where it has so far received little attention. The advantages of deep learning stem from its large learning capacity, its linear scalability with the aid of stochastic gradient descent (SGD), and its low inference complexity [33]. Feed-forward networks can also be naturally tuned jointly with task-driven loss functions. On the other hand, generic deep architectures [34] largely ignore problem-specific formulations and prior knowledge. As a result, one may encounter difficulties in choosing optimal architectures, interpreting their working mechanisms, and initializing the parameters.

In this section, we demonstrate how to incorporate the sparse coding-based pipeline into deep learning models for clustering. The proposed framework takes advantage of both sparse coding and deep learning. Specifically, the feature learning layers are inspired by the graph-regularized sparse coding inference process, obtained by reformulating an iterative algorithm [29] into a feed-forward network, named TAGnet. These layers are then jointly optimized with task-specific loss functions from end to end. Our technical novelty and merits are summarized as follows:

  •  As a deep feed-forward model, the proposed framework provides an extremely efficient inference process and high scalability to large-scale data. It allows learning more descriptive features than conventional sparse codes.
  •  We discover that incorporating the expertise of sparse code-based clustering pipelines [12,8] improves our performance significantly. Moreover, it greatly facilitates model initialization and interpretation.
  •  By further enforcing auxiliary clustering tasks on the feature hierarchy, we develop DTAGnet and observe further performance boosts on the CMU MultiPIE dataset [35].

5.2.2 Related Work

5.2.2.1 Sparse Coding for Clustering

Assume data samples $X = [x_1, x_2, \ldots, x_n]$, where $x_i \in \mathbb{R}^{m \times 1}$, $i = 1, 2, \ldots, n$. They are encoded into sparse codes $A = [a_1, a_2, \ldots, a_n]$, where $a_i \in \mathbb{R}^{p \times 1}$, $i = 1, 2, \ldots, n$, using a learned dictionary $D = [d_1, d_2, \ldots, d_p]$, where $d_i \in \mathbb{R}^{m \times 1}$, $i = 1, 2, \ldots, p$, are the learned atoms. The sparse codes are obtained by solving the following convex optimization problem ($\lambda$ is a constant):

$$A = \arg\min_A \frac{1}{2}\|X - DA\|_F^2 + \lambda\sum_i\|a_i\|_1. \tag{5.15}$$

In [12], the authors suggested that the sparse codes can be used to construct the similarity graph for spectral clustering [13]. Furthermore, to capture the geometric structure of local data manifolds, the graph regularized sparse codes are further suggested in [8,19] by solving

$$A = \arg\min_A \frac{1}{2}\|X - DA\|_F^2 + \lambda\sum_i\|a_i\|_1 + \frac{\alpha}{2}\mathrm{Tr}(ALA^T), \tag{5.16}$$

where $L$ is the graph Laplacian matrix, which can be constructed from a prechosen pairwise similarity (affinity) matrix $P$. More recently, in [31], the authors proposed to simultaneously learn feature extraction and discriminative clustering by formulating a task-driven sparse coding model [17]. They showed that such joint methods consistently outperform their non-joint counterparts.
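
For reference, the $\ell_1$-graph pipeline of [12] sketched above can be emulated roughly as follows, using scikit-learn's Lasso as a stand-in for the $\ell_1$ solver; the sparsity level and the symmetrization are illustrative assumptions rather than the exact construction in [12].

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def l1_graph_spectral_clustering(X, n_clusters, alpha=0.05):
    """Sketch: represent each sample sparsely over the remaining samples,
    use the coefficient magnitudes as edge weights, then apply spectral
    clustering on the resulting affinity.  X: n x m array, samples in rows."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Reconstruct x_i from the other samples with an l1 penalty.
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=2000)
        coder.fit(X[others].T, X[i])
        W[i, others] = np.abs(coder.coef_)
    W = 0.5 * (W + W.T)                            # symmetrize the graph
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```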

5.2.2.2 Deep Learning for Clustering

In [36], the authors explored the possibility of employing deep learning in graph clustering. They first learned a nonlinear embedding of the original graph by an autoencoder (AE), followed by a K-means algorithm on the embedding to obtain the final clustering result. However, their method neither exploits more adapted deep architectures nor performs any task-specific joint optimization. In [37], a deep belief network (DBN) with nonparametric clustering was presented. As a generative graphical model, the DBN provides faster feature learning but is less effective than AEs in terms of learning discriminative features for clustering. In [38], the authors extended the semi-nonnegative matrix factorization (Semi-NMF) model to a Deep Semi-NMF model, whose architecture resembles stacked AEs. Our proposed model is substantially different from all these previous approaches, due to its unique task-specific architecture derived from sparse coding domain expertise, as well as its joint optimization with clustering-oriented loss functions.

5.2.3 Model Formulation

The proposed pipeline consists of two blocks. As depicted in Fig. 5.4A, it is trained end-to-end in an unsupervised way. It includes a feed-forward architecture, termed Task-specific And Graph-regularized Network (TAGnet), to learn discriminative features, and the clustering-oriented loss function.

Figure 5.4 (A) The proposed pipeline, consisting of the TAGnet network for feature learning, followed by the clustering-oriented loss functions. The parameters $W$, $S$, $\theta$ and $\omega$ are all learned end-to-end from training data. (B) The block diagram of solving (5.17).

5.2.3.1 TAGnet: Task-specific And Graph-regularized Network

Different from generic deep architectures, TAGnet is designed in a way to take advantage of the successful sparse code-based clustering pipelines [8,31]. It aims to learn features that are optimized under clustering criteria, while encoding graph constraints (5.16) to regularize the target solution. TAGnet is derived from the following theorem:

Theorem 5.3

The optimal sparse code A from (5.16) is the fixed point of

$$A = h_{\frac{\lambda}{N}}\!\left[\left(I - \frac{1}{N}D^T D\right)A - A\left(\frac{\alpha}{N}L\right) + \frac{1}{N}D^T X\right], \tag{5.17}$$

where $h_\theta$ is an element-wise shrinkage function parameterized by $\theta$:

$$[h_\theta(u)]_i = \mathrm{sign}(u_i)\,(|u_i| - \theta_i)_+, \tag{5.18}$$

and $N$ is an upper bound on the largest eigenvalue of $D^T D$.

The complete proof of Theorem 5.3 can be found in the Appendix. Theorem 5.3 outlines an iterative algorithm to solve (5.16). Under quite mild conditions [39], after A is initialized, one may repeat the shrinkage and thresholding process in (5.17) until convergence. Moreover, the iterative algorithm could be alternatively expressed as the block diagram in Fig. 5.4B, where

$$W = \frac{1}{N}D^T, \qquad S = I - \frac{1}{N}D^T D, \qquad \theta = \frac{\lambda}{N}. \tag{5.19}$$

In particular, we define the new operator “$\times L$”: $A \to \frac{\alpha}{N} A L$, where the input $A$ is multiplied by the prefixed $L$ from the right and scaled by the constant $\frac{\alpha}{N}$.

By time-unfolding and truncating Fig. 5.4B to a fixed number of K iterations (K = 2 by default),4 we obtain the TAGnet form in Fig. 5.4A; W, S and θ are all to be learned jointly from data, while S and θ are tied weights for both stages.5 It is important to note that the output A of TAGnet is not necessarily identical to the sparse codes predicted by solving (5.16). Instead, the goal of TAGnet is to learn a discriminative embedding that is optimal for clustering.
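
A minimal numpy sketch of the unfolded TAGnet forward pass is given below, with $K = 2$ stages and parameters initialized from a random dictionary via (5.19); the identity Laplacian, the value of $\lambda$, and all dimensions are placeholders, and no training is performed.

```python
import numpy as np

def soft_threshold(U, theta):
    """Element-wise shrinkage h_theta of (5.18), one threshold per row."""
    return np.sign(U) * np.maximum(np.abs(U) - theta[:, None], 0.0)

def tagnet_forward(X, W, S, theta, L, alpha, N, n_stages=2):
    """Unrolled forward pass sketched from (5.17):
    Z <- h_theta( W X + S Z - (alpha / N) Z L ).
    W, S and theta play the roles in (5.19) and become trainable in the
    actual model; here they are plain arrays."""
    B = W @ X                                  # p x n, computed once per input
    Z = soft_threshold(B, theta)               # initial shrinkage of W X
    for _ in range(n_stages):                  # K truncated iterations
        Z = soft_threshold(B + S @ Z - (alpha / N) * (Z @ L), theta)
    return Z

# Hypothetical initialization from a (random) dictionary D via (5.19).
m, p, n = 64, 128, 200
D = np.random.randn(m, p)
D /= np.linalg.norm(D, axis=0, keepdims=True)
N = np.linalg.norm(D.T @ D, 2) * 1.01          # upper bound on the top eigenvalue
W0, S0, theta0 = D.T / N, np.eye(p) - (D.T @ D) / N, np.full(p, 0.1 / N)
X = np.random.randn(m, n)
L = np.eye(n)                                  # stand-in for the true graph Laplacian
A = tagnet_forward(X, W0, S0, theta0, L, alpha=5.0, N=N, n_stages=2)
```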

To facilitate training, we further rewrite (5.18) as

$$[h_\theta(u)]_i = \theta_i\,\mathrm{sign}(u_i)\,\big(|u_i|/\theta_i - 1\big)_+ = \theta_i\, h_1(u_i/\theta_i). \tag{5.20}$$

Equation (5.20) indicates that the original neuron with trainable thresholds can be decomposed into two linear scaling layers plus a unit-threshold neuron. The weights of the two scaling layers are diagonal matrices defined by θ and its element-wise reciprocal, respectively.
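
The decomposition can be verified numerically in a few lines (assuming positive thresholds):

```python
import numpy as np

def h(u, theta):
    """Shrinkage h_theta of (5.18)."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

u = np.random.randn(5)
theta = np.abs(np.random.randn(5)) + 0.1
# (5.20): scale by 1/theta, apply a unit-threshold shrinkage, scale back by theta.
assert np.allclose(h(u, theta), theta * h(u / theta, 1.0))
```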

A notable component of TAGnet is the $\times L$ branch of each stage. The graph Laplacian $L$ can be computed in advance. In the feed-forward process, a $\times L$ branch takes the intermediate $Z^k$ ($k = 1, 2$) as input and applies the “$\times L$” operator defined above. The output is aggregated with the output of the learnable $S$ layer. During backpropagation, $L$ is not altered. In this way, the graph regularization is effectively encoded in the TAGnet structure as a prior.

An appealing highlight of (D)TAGnet lies in its very effective and straightforward initialization strategy. With sufficient data, many recent deep networks train well from random initializations without pretraining. However, it has been discovered that poor initializations hamper the effectiveness of first-order methods (e.g., SGD) in certain cases [40]. For (D)TAGnet, it is much easier to initialize the model in the right regime. This benefits from the analytical relationships between sparse coding and the network hyperparameters defined in (5.19): we can initialize the deep models from the corresponding sparse coding components, which are easier to obtain. Such an advantage becomes much more important when the training data is limited.

5.2.3.2 Clustering-Oriented Loss Functions

Assume $K$ clusters, and let $\omega = [\omega_1, \ldots, \omega_K]$ be the set of parameters of the loss function, where $\omega_i$ corresponds to the $i$th cluster, $i = 1, 2, \ldots, K$. In this section, we adopt the following two forms of clustering-oriented loss functions.

One natural choice of the loss function is extended from the popular softmax loss, and takes the entropy-like form as

$$C(A, \omega) = -\sum_{i=1}^{n}\sum_{j=1}^{K} p_{ij}\log p_{ij}, \tag{5.21}$$

where $p_{ij}$ denotes the probability that sample $x_i$ belongs to cluster $j$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, K$,

$$p_{ij} = p(j \mid \omega, a_i) = \frac{e^{\omega_j^T a_i}}{\sum_{l=1}^{K} e^{\omega_l^T a_i}}. \tag{5.22}$$

In testing, the predicted cluster label of an input $a_i$ is determined using the maximum likelihood criterion based on the predicted $p_{ij}$.

The maximum margin clustering (MMC) approach was proposed in [3]. MMC finds a way to label the samples by running an SVM implicitly, and the SVM margin obtained is maximized over all possible labelings [5]. Referring to the MMC definition, the authors of [31] designed the max-margin loss as

$$C(A, \omega) = \frac{\lambda}{2}\|\omega\|^2 + \sum_{i=1}^{n} C(a_i, \omega). \tag{5.23}$$

In the above equation, the loss for an individual sample $a_i$ is defined as

$$C(a_i, \omega) = \max\big(0,\, 1 + f_{r_i}(a_i) - f_{y_i}(a_i)\big), \quad \text{where } y_i = \arg\max_{j=1,\ldots,K} f_j(a_i),\ \ r_i = \arg\max_{j=1,\ldots,K,\, j\neq y_i} f_j(a_i), \tag{5.24}$$

where $f_j$ is the prototype for the $j$th cluster. In testing, the predicted cluster label of an input $a_i$ is determined by the weight vector that achieves the maximum $\omega_j^T a_i$.

Model Complexity. The proposed framework can handle large-scale and high-dimensional data effectively via the stochastic gradient descent (SGD) algorithm. In each step, the backpropagation procedure requires only operations of order $O(p)$ [29]. The training algorithm takes $O(Cnp)$ time ($C$ is a constant depending on the total number of epochs, the number of stages, etc.). In addition, SGD is easy to parallelize, and the model can thus be trained efficiently using GPUs.

5.2.3.3 Connections to Existing Models

There is a close connection between sparse coding and neural networks. In [29], a feed-forward neural network, named LISTA, was proposed to efficiently approximate the sparse code a of an input signal x, where a is obtained by solving (5.15) in advance. The LISTA network learns its hyperparameters as a general regression model from training data to their presolved sparse codes, using backpropagation.

LISTA overlooks the useful geometric information among data points [8], and therefore can be viewed as a special case of TAGnet in Fig. 5.4 with $\alpha = 0$ (i.e., removing the $\times L$ branches). Moreover, LISTA aims to approximate the “optimal” sparse codes preobtained from (5.15), and therefore requires the estimation of D and the tedious precomputation of A. The authors did not exploit its potential in supervised and task-specific feature learning.

5.2.4 A Deeper Look: Hierarchical Clustering by DTAGnet

Deep networks are well known for their ability to learn semantically rich representations in their hidden layers [41]. In this section, we investigate how the intermediate features $Z^k$ ($k = 1, 2$) in TAGnet (Fig. 5.4A) can be interpreted, and further utilized to improve the model, for specific clustering tasks. Compared to related non-deep models [31], such a hierarchical clustering property is another unique advantage of being deep.

Our strategy is mainly inspired by the algorithmic framework of deeply supervised nets [42]. As shown in Fig. 5.5, our proposed Deeply-Task-specific And Graph-regularized Network (DTAGnet) brings in additional deep feedback by associating a clustering-oriented local auxiliary loss $C_k(Z^k, \omega^k)$ ($k = 1, 2$) with each stage. Such an auxiliary loss takes the same form as the overall loss $C(A, \omega)$, except that the expected cluster number may be different, depending on the auxiliary clustering task to be performed. DTAGnet backpropagates errors not only from the overall loss layer, but also simultaneously from the auxiliary losses.

Figure 5.5 The DTAGnet architecture, taking the CMU MultiPIE dataset as an example. The model is able to simultaneously learn features for pose clustering ($Z^1$), for expression clustering ($Z^2$), and for identity clustering ($A$). The first two attributes are related to and helpful for the last (overall) task. Some of the images are adapted from [35] and [38].

While seeking the optimal performance on the target clustering task, DTAGnet is also driven by two auxiliary tasks that are explicitly targeted at clustering specific attributes. It enforces a constraint on each hidden representation to directly make a good cluster prediction. In addition to the overall loss, the introduction of auxiliary losses gives another strong push toward obtaining discriminative and sensible features at each individual stage. As discovered in the classification experiments of [42], the auxiliary losses both act as feature regularization to reduce generalization errors and result in faster convergence. We also find in Sect. 5.2.5 that every $Z^k$ ($k = 1, 2$) is indeed best suited for its targeted task.
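
The combined objective can be sketched as follows, using the softmax-based entropy loss (5.21)–(5.22) for both the overall and the auxiliary heads; the balancing weights gamma are an assumption, since the text only states that errors are back-propagated from all loss layers simultaneously.

```python
import numpy as np

def softmax_entropy_loss(Z, omega, eps=1e-12):
    """Entropy-minimization loss (5.21) with the softmax posterior (5.22).
    Z: p x n features (columns are samples); omega: p x K cluster parameters."""
    logits = omega.T @ Z                          # K x n
    logits -= logits.max(axis=0, keepdims=True)   # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=0, keepdims=True)
    return -np.sum(P * np.log(P + eps))

def dtagnet_objective(Z1, Z2, A, omega1, omega2, omega, gamma=(1.0, 1.0)):
    """Deeply-supervised objective of Sect. 5.2.4: overall loss on A plus
    auxiliary losses on the intermediate features Z1 and Z2 (hypothetical
    weights gamma)."""
    return (softmax_entropy_loss(A, omega)
            + gamma[0] * softmax_entropy_loss(Z1, omega1)
            + gamma[1] * softmax_entropy_loss(Z2, omega2))
```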

In [38], a Deep Semi-NMF model was proposed to learn hidden representations that lend themselves to a clustering interpretation according to different attributes. The authors considered the problem of mapping facial images to their identities. A face image also contains attributes like pose and expression that help identify the person depicted. In their experiments, the authors found that by further factorizing this mapping in a way that each factor adds an extra layer of abstraction, the deep model could automatically learn latent intermediate representations suited for clustering these identity-related attributes. Although there is a clustering interpretation, those hidden representations are not specifically optimized in a clustering sense. Instead, the entire model is trained with only the overall reconstruction loss, after which clustering is performed by K-means on the learned features. Consequently, their clustering performance is not satisfactory. Our study shares a similar observation and motivation with [38], but proceeds in a more task-specific manner by optimizing the auxiliary clustering tasks jointly with the overall task.

5.2.5 Experiment Results

5.2.5.1 Datasets and Measurements

We evaluate the proposed model on three publicly available datasets:

  •  MNIST [8] consists of a total of 70,000 quasi-binary, handwritten digit images, with digits 0 to 9. The digits are normalized and centered in fixed-size images of 28 × 28.
  •  CMU MultiPIE [35] contains around 750,000 images of 337 subjects, captured under varied laboratory conditions. A unique property of CMU MultiPIE is that each image comes with labels for the identity, illumination, pose and expression attributes, which is why it was chosen in [38] to learn multiattribute features (Fig. 5.5) for hierarchical clustering. In our experiments, we follow [38] and adopt a subset of 13,230 images of 147 subjects in 5 different poses and with 6 different expressions. Notably, we do not preprocess the images with the piecewise affine warping utilized by [38] to align them.
  •  COIL20 [43] contains 1440 gray-scale images of size 32 × 32 for 20 objects (72 images per object). The images of each object were taken 5 degrees apart.

Although this section only evaluates the proposed method on image datasets, the methodology itself is not limited to image data. We apply two widely used measures to evaluate the clustering performance, the accuracy and the normalized mutual information (NMI) [8,12]. We follow the convention of many clustering works [8,19,31] and do not distinguish training from testing: we train our models on all available samples of each dataset and report the clustering performance on them as our testing results. Results are averaged over 5 independent runs.

5.2.5.2 Experiment Settings

The proposed networks are implemented using the cuda-convnet package [34]. The network takes K = 2 stages by default. We apply a constant learning rate of 0.01 with no momentum to all trainable layers, and the batch size is 128. In particular, to encode the graph regularization as a prior, we fix L during model training by setting its learning rate to 0. Experiments are run on a workstation with 12 Intel Xeon 2.67 GHz CPUs and 1 GTX680 GPU. The training takes approximately 1 hour on the MNIST dataset. It is also observed that the training efficiency of our model scales approximately linearly with the data size.

In our experiments, we set the default value of α to 5 and p to 128, and λ is chosen from [0.1, 1] by cross-validation.6 A dictionary D is first learned from X by K-SVD [20]; W, S and θ are then initialized based on (5.19), while L is precalculated from P, which is formulated by the Gaussian kernel $P_{ij} = \exp\left(-\frac{\|x_i - x_j\|_2^2}{2\delta^2}\right)$ (δ is also selected by cross-validation). After obtaining the output A from the initial (D)TAGnet models, ω (or $\omega^k$) can be initialized by minimizing (5.21) or (5.23) over A (or $Z^k$).

5.2.5.3 Comparison Experiments and Analysis

Benefits of the Task-specific Deep Architecture

We denote the proposed model of TAGnet plus entropy-minimization loss (EML) (5.21) as TAGnet-EML, and the one plus maximum-margin loss (MML) (5.23) as TAGnet-MML, respectively. We include the following comparison methods:

  •  We refer to the initializations of the proposed joint models as their “non-joint” counterparts, denoted as NJ-TAGnet-EML and NJ-TAGnet-MML (NJ is short for non-joint), respectively.
  •  We design a baseline encoder (BE), which is a fully-connected feed-forward network consisting of three hidden layers of dimension p with ReLU neurons. The BE thus has the same parameter complexity as TAGnet.7 The BEs are also tuned by EML or MML in the same way, denoted as BE-EML or BE-MML, respectively. We intend to verify our important claim that the proposed model benefits from the task-specific TAGnet architecture, rather than merely from the large learning capacity of generic deep models.
  •  We compare the proposed models with their closest “shallow” competitors, i.e., the joint optimization methods of graph-regularized sparse coding and discriminative clustering in [31]. We reimplement their work using both the (5.21) and (5.23) losses, denoted as SC-EML and SC-MML (SC is short for sparse coding). Since the authors of [31] already revealed that SC-MML outperforms classical methods such as MMC and the $\ell_1$-graph methods, we do not compare with them again.
  •  We also include Deep Semi-NMF [38] as a state-of-the-art deep learning-based clustering method. We mainly compare our results with its reported performances on CMU MultiPIE.8

As revealed by the full comparison results in Table 5.3, the proposed task-specific deep architectures outperform the others by a noticeable margin. The underlying domain expertise guides the data-driven training in a more principled way. In contrast, the “general-architecture” baseline encoders (BE-EML and BE-MML) produce much worse (even the worst) results. Furthermore, it is evident that the proposed end-to-end optimized models outperform their “non-joint” counterparts. For example, on the MNIST dataset, TAGnet-MML surpasses NJ-TAGnet-MML by around 4% in accuracy and 5% in NMI.

Table 5.3

Accuracy and NMI performance comparisons on all three datasets

| Dataset | Metric | TAGnet-EML | TAGnet-MML | NJ-TAGnet-EML | NJ-TAGnet-MML | BE-EML | BE-MML | SC-EML | SC-MML | Deep Semi-NMF |
|---|---|---|---|---|---|---|---|---|---|---|
| MNIST | Acc | 0.6704 | 0.6922 | 0.6472 | 0.5052 | 0.5401 | 0.6521 | 0.6550 | 0.6784 | / |
| | NMI | 0.6261 | 0.6511 | 0.5624 | 0.6067 | 0.5002 | 0.5011 | 0.6150 | 0.6451 | / |
| CMU MultiPIE | Acc | 0.2176 | 0.2347 | 0.1727 | 0.1861 | 0.1204 | 0.1451 | 0.2002 | 0.2090 | 0.17 |
| | NMI | 0.4338 | 0.4555 | 0.3167 | 0.3284 | 0.2672 | 0.2821 | 0.3337 | 0.3521 | 0.36 |
| COIL20 | Acc | 0.8553 | 0.8991 | 0.7432 | 0.7882 | 0.7441 | 0.7645 | 0.8225 | 0.8658 | / |
| | NMI | 0.9090 | 0.9277 | 0.8707 | 0.8814 | 0.8028 | 0.8321 | 0.8850 | 0.9127 | / |

By comparing TAGnet-EML/TAGnet-MML with SC-EML/SC-MML, we draw a promising conclusion: adopting a more heavily parameterized deep architecture allows a larger feature learning capacity than conventional sparse coding. Although similar points are well established in many other fields [34], we are interested in a closer look at the two. Fig. 5.6 plots the clustering accuracy and NMI curves of TAGnet-EML/TAGnet-MML on the MNIST dataset against the number of iterations. Each model is well initialized at the very beginning, and the clustering accuracy and NMI are computed every 100 iterations. At first, the clustering performances of the deep models are even slightly worse than those of the sparse coding methods, mainly because the initialization of TAGnet hinges on a truncated approximation of graph-regularized sparse coding. After a small number of iterations, the performances of the deep models surpass those of the sparse coding ones, and continue rising monotonically until reaching a higher plateau.

Figure 5.6 The accuracy and NMI plots of TAGnet-EML/TAGnet-MML on MNIST, starting from the initialization and tested every 100 iterations. The accuracy and NMI of SC-EML/SC-MML are also plotted as baselines.
Effects of Graph Regularization

In (5.16), the graph regularization term imposes stronger smoothness constraints on the sparse codes as α grows larger. The same holds for TAGnet. We investigate how the clustering performances of TAGnet-EML/TAGnet-MML are influenced by various α values. From Fig. 5.7, we observe the same general tendency on all three datasets: as α increases, the accuracy/NMI first rises and then decreases, with the peak appearing for $\alpha \in [5, 10]$. As an interpretation, the local manifold information is not sufficiently encoded when α is too small (α = 0 completely disables the $\times L$ branch of TAGnet and reduces it to the LISTA network [29] fine-tuned by the losses). On the other hand, when α is large, the sparse codes are “oversmoothed” with reduced discriminative ability. Note that similar phenomena are also reported in other relevant literature, e.g., [8,31].

Figure 5.7 The clustering accuracy and NMI plots (x-axis in logarithmic scale) of TAGnet-EML/TAGnet-MML versus the parameter choices of α, on: (A)–(B) MNIST; (C)–(D) CMU MultiPIE; (E)–(F) COIL20.

Furthermore, comparing Fig. 5.7A–F, it is noteworthy how differently the graph regularization behaves on the three datasets. We notice that the COIL20 dataset is the most sensitive to the choice of α: increasing α from 0.01 to 50 leads to an improvement of more than 10%, in terms of both accuracy and NMI. This verifies the significance of graph regularization when training samples are limited [19]. On the MNIST dataset, both models obtain a gain of up to 6% in accuracy and 5% in NMI by tuning α from 0.01 to 10. However, unlike COIL20, which almost always favors larger α, the model performance on MNIST tends to be not only saturated but even significantly hampered when α continues rising to 50. The CMU MultiPIE dataset witnesses moderate improvements of around 2% in both measurements and is not as sensitive to α as the other two. Potentially, this might be due to the complex variability in the original images, which makes the constructed graph unreliable for estimating the underlying manifold geometry. We suspect that more sophisticated graphs may help alleviate the problem, and we will explore this in the future.

Scalability and Robustness

On the MNIST dataset, we reconduct the clustering experiments with the cluster number $N_c$ ranging from 2 to 10, using TAGnet-EML/TAGnet-MML. Fig. 5.8 shows how the clustering accuracy and NMI change as the number of clusters varies. The clustering performance transitions smoothly and robustly when the task scale changes.

Figure 5.8 The clustering accuracy and NMI plots of TAGnet-EML/TAGnet-MML versus the cluster number $N_c$, ranging from 2 to 10, on MNIST.

To examine the proposed models' robustness to noise, we add Gaussian noise, with standard deviation s ranging from 0 (noiseless) to 0.3, to the data and retrain our MNIST models. Fig. 5.9 indicates that both TAGnet-EML and TAGnet-MML possess a certain robustness to noise: when s is less than 0.1, there is hardly any visible performance degradation. While TAGnet-MML constantly outperforms TAGnet-EML in all experiments (as MMC is well known to be highly discriminative [3]), it is interesting to observe in Fig. 5.9 that the latter is slightly more robust to noise than the former. This is perhaps owing to the probability-driven loss form (5.21) of EML, which allows for more flexibility.

Figure 5.9 The clustering accuracy and NMI plots of TAGnet-EML/TAGnet-MML versus the noise level s on MNIST.

5.2.5.4 Hierarchical Clustering on CMU MultiPIE

As observed, CMU MultiPIE is very challenging for the basic identity clustering task. However, it comes with several other attributes, pose, expression, and illumination, which can be of assistance in our proposed DTAGnet framework. In this section, we apply a setting similar to [38] on the same CMU MultiPIE subset, by setting pose clustering as the Stage I auxiliary task and expression clustering as the Stage II auxiliary task.9 In this way, we target $C_1(Z^1, \omega^1)$ at 5 clusters, $C_2(Z^2, \omega^2)$ at 6 clusters, and finally $C(A, \omega)$ at 147 clusters.

The training of DTAGnet-EML/DTAGnet-MML follows the same process described above, except that extra gradients back-propagated from the task $C_k(Z^k, \omega^k)$ in Stage $k$ ($k = 1, 2$) are considered. After training, we test each $C_k(Z^k, \omega^k)$ separately on its targeted task. In DTAGnet, each auxiliary task is also jointly optimized with its intermediate feature $Z^k$, which differentiates our methodology substantially from [38]. It is thus no surprise to see in Table 5.4 that each auxiliary task obtains much better performance than in [38].10 Most notably, the overall identity clustering task witnesses a very impressive boost of around 7% in accuracy. We also test DTAGnet-EML/DTAGnet-MML with only $C_1(Z^1, \omega^1)$ or $C_2(Z^2, \omega^2)$ kept. Experiments verify that, by adding auxiliary tasks gradually, the overall task keeps benefiting; those auxiliary tasks, when enforced together, also reinforce each other mutually.

Table 5.4

Effects of incorporating auxiliary clustering tasks in DTAGnet-EML/DTAGnet-MML (P, Pose; E, Expression; I, Identity)

| Method | Stage I Task | Stage I Acc | Stage II Task | Stage II Acc | Overall Task | Overall Acc |
|---|---|---|---|---|---|---|
| DTAGnet-EML | / | / | / | / | I | 0.2176 |
| | P | 0.5067 | / | / | I | 0.2303 |
| | / | / | E | 0.3676 | I | 0.2507 |
| | P | 0.5407 | E | 0.7027 | I | 0.2833 |
| DTAGnet-MML | / | / | / | / | I | 0.2347 |
| | P | 0.5251 | / | / | I | 0.2635 |
| | / | / | E | 0.3988 | I | 0.2858 |
| | P | 0.5538 | E | 0.4231 | I | 0.3021 |

One might be curious about which factor matters more for the performance boost: the deeply task-specific architecture that brings extra discriminative feature learning, or the proper design of auxiliary tasks that capture the intrinsic data structure characterized by attributes.

To answer this question, we vary the target cluster number in either $C_1(Z^1, \omega^1)$ or $C_2(Z^2, \omega^2)$ and reconduct the experiments. Table 5.5 reveals that adding auxiliary tasks, even those without any straightforward task-specific interpretation (e.g., partitioning the MultiPIE subset into 4, 8, 12 or 20 clusters hardly makes semantic sense), may still help gain better performance. A plausible explanation is that they simply promote more discriminative feature learning in a low-to-high, coarse-to-fine scheme; this is a complementary observation to the conclusion found for classification in [42]. On the other hand, at least in this specific case, the models seem to achieve the best performances when the target cluster numbers of the auxiliary tasks get closer to the ground truth (5 and 6 here). We conjecture that, when properly “matched”, the hidden representation in each layer is in fact best suited for clustering the attributes corresponding to that layer. The whole model resembles the sharing of low-level feature filters among several relevant high-level tasks in convolutional networks [44], but in a distinct context.

Table 5.5

Effects of varying target cluster numbers of auxiliary tasks in DTAGnet-EML/DTAGnet-MML

| Method | #clusters in Stage I | #clusters in Stage II | Overall Accuracy |
|---|---|---|---|
| DTAGnet-EML | 4 | 4 | 0.2827 |
| | 8 | 8 | 0.2813 |
| | 12 | 12 | 0.2802 |
| | 20 | 20 | 0.2757 |
| DTAGnet-MML | 4 | 4 | 0.3030 |
| | 8 | 8 | 0.3006 |
| | 12 | 12 | 0.2927 |
| | 20 | 20 | 0.2805 |

We hence conclude that the deeply-supervised fashion proves helpful for deep clustering models, even when there are no explicit attributes for constructing a practically meaningful hierarchical clustering problem. However, it is preferable to exploit such attributes when available, as they lead not only to superior performances but also to more clearly interpretable models. The learned intermediate features can potentially be utilized for multitask learning [45].

5.2.6 Conclusion

In this section, we present a deep learning-based clustering framework. Trained from end to end, it features a task-specific deep architecture inspired by the sparse coding domain expertise, which is then optimized under clustering-oriented losses. Such a well-designed architecture leads to more effective initialization and training, and significantly outperforms generic architectures of the same parameter complexity. The model could be further interpreted and enhanced, by introducing auxiliary clustering losses to the intermediate features. Extensive experiments verify the effectiveness and robustness of the proposed models.

References

[1] R.O. Duda, P.E. Hart, D.G. Stork, Pattern classification. John Wiley & Sons; 1999.

[2] C. Biernacki, G. Celeux, G. Govaert, Assessing a mixture model for clustering with the integrated completed likelihood, IEEE TPAMI 2000;22(7):719–725.

[3] L. Xu, J. Neufeld, B. Larson, D. Schuurmans, Maximum margin clustering, NIPS. 2004:1537–1544.

[4] B. Zhao, F. Wang, C. Zhang, Efficient multiclass maximum margin clustering, Proceedings of the 25th international conference on Machine learning. ACM; 2008:1248–1255.

[5] B. Zhao, F. Wang, C. Zhang, Efficient maximum margin clustering via cutting plane algorithm, SDM. 2008.

[6] X. Li, K. Zhang, T. Jiang, Minimum entropy clustering and applications to gene expression analysis, CSB. IEEE; 2004:142–151.

[7] D. Barber, F.V. Agakov, Kernelized infomax clustering, Advances in neural information processing systems. 2005:17–24.

[8] M. Zheng, J. Bu, C. Chen, C. Wang, L. Zhang, G. Qiu, et al., Graph regularized sparse coding for image representation, IEEE TIP 2011;20(5):1327–1336.

[9] L. Zhang, W.-D. Zhou, L.-C. Jiao, Kernel clustering algorithm, Chinese Journal of Computers 2002;6, 004.

[10] V. Roth, T. Lange, Feature selection in clustering problems, NIPS. 2003.

[11] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, Y. Ma, Robust face recognition via sparse representation, IEEE TPAMI 2009;31(2):210–227.

[12] B. Cheng, J. Yang, S. Yan, Y. Fu, T.S. Huang, Learning with $\ell_1$-graph for image analysis, IEEE TIP 2010;19(4).

[13] A.Y. Ng, M.I. Jordan, Y. Weiss, et al., On spectral clustering: analysis and an algorithm, NIPS 2002;2:849–856.

[14] P. Sprechmann, G. Sapiro, Dictionary learning and sparse coding for unsupervised clustering, Acoustics speech and signal processing (ICASSP), 2010 IEEE international conference on. IEEE; 2010:2042–2045.

[15] Y.C. Chen, C.S. Sastry, V.M. Patel, P.J. Phillips, R. Chellappa, In-plane rotation and scale invariant clustering using dictionaries, IEEE Transactions on Image Processing 2013;22(6):2166–2180.

[16] Y. Yang, Z. Wang, J. Yang, J. Han, T.S. Huang, Regularized $\ell_1$-graph for data clustering, Proceedings of the British machine vision conference. 2014.

[17] J. Mairal, F. Bach, J. Ponce, Task-driven dictionary learning, IEEE TPAMI 2012;34(4):791–804.

[18] Z. Wang, N.M. Nasrabadi, T.S. Huang, Semisupervised hyperspectral classification using task-driven dictionary learning with Laplacian regularization, IEEE Transactions on Geoscience and Remote Sensing 2015;53(3):1161–1173.

[19] Y. Yang, Z. Wang, J. Yang, J. Wang, S. Chang, T.S. Huang, Data clustering by Laplacian regularized $\ell_1$-graph, AAAI. 2014.

[20] M. Aharon, M. Elad, A. Bruckstein, K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation, IEEE TSP 2006;54(11):4311–4322.

[21] J. Yang, Z. Wang, Z. Lin, X. Shu, T. Huang, Bilevel sparse coding for coupled feature spaces, CVPR. IEEE; 2012:2360–2367.

[22] B. Dai, B. Hu, Minimum conditional entropy clustering: a discriminative framework for clustering, ACML. 2010:47–62.

[23] Y.F. Li, I.W. Tsang, J.T. Kwok, Z.H. Zhou, Tighter and convex maximum margin clustering, International conference on artificial intelligence and statistics. 2009:344–351.

[24] K. Crammer, Y. Singer, On the algorithmic implementation of multiclass kernel-based vector machines, The Journal of Machine Learning Research 2002;2:265–292.

[25] C.P. Lee, C.J. Lin, A study on l2-loss (squared hinge-loss) multiclass SVM, Neural Computation 2013;25(5):1302–1323.

[26] L. Lovász, M. Plummer, Matching theory, vol. 367. American Mathematical Soc.; 2009.

[27] F. Nie, D. Xu, I.W. Tsang, C. Zhang, Spectral embedded clustering, IJCAI. 2009:1181–1186.

[28] S. Chang, W. Han, J. Tang, G. Qi, C. Aggarwal, T.S. Huang, Heterogeneous network embedding via deep architectures, ACM SIGKDD. 2015.

[29] K. Gregor, Y. LeCun, Learning fast approximations of sparse coding, ICML. 2010:399–406.

[30] D.P. Bertsekas, Nonlinear programming. Athena Scientific; 1999.

[31] Z. Wang, Y. Yang, S. Chang, J. Li, S. Fong, T.S. Huang, A joint optimization framework of sparse coding and discriminative clustering, IJCAI. 2015.

[32] H. Lee, A. Battle, R. Raina, A.Y. Ng, Efficient sparse coding algorithms, NIPS. 2006:801–808.

[33] Y. Bengio, Learning deep architectures for AI, Foundations and Trends in Machine Learning 2009;2(1):1–127.

[34] A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional neural networks, NIPS. 2012:1097–1105.

[35] R. Gross, I. Matthews, J. Cohn, T. Kanade, S. Baker, Multi-PIE, Image and Vision Computing 2010;28(5).

[36] F. Tian, B. Gao, Q. Cui, E. Chen, T.Y. Liu, Learning deep representations for graph clustering, AAAI. 2014.

[37] G. Chen, Deep learning with nonparametric clustering, arXiv preprint arXiv:1501.03084; 2015.

[38] G. Trigeorgis, K. Bousmalis, S. Zafeiriou, B. Schuller, A deep semi-NMF model for learning hidden representations, ICML. 2014:1692–1700.

[39] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences 2009;2(1):183–202.

[40] I. Sutskever, J. Martens, G. Dahl, G. Hinton, On the importance of initialization and momentum in deep learning, ICML. 2013:1139–1147.

[41] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, et al., DeCAF: a deep convolutional activation feature for generic visual recognition, arXiv preprint arXiv:1310.1531; 2013.

[42] C.Y. Lee, S. Xie, P. Gallagher, Z. Zhang, Z. Tu, Deeply-supervised nets, arXiv preprint arXiv:1409.5185; 2014.

[43] S.A. Nene, S.K. Nayar, H. Murase, Columbia object image library (COIL-20), Tech. rep.; 1996.

[44] X. Glorot, A. Bordes, Y. Bengio, Domain adaptation for large-scale sentiment classification: a deep learning approach, ICML. 2011:513–520.

[45] Y. Wang, D. Wipf, Q. Ling, W. Chen, I. Wassell, Multi-task learning for subspace segmentation, ICML. 2015.


1  “Reprinted, with permission, from Wang, Zhangyang, Yang, Yingzhen, Chang, Shiyu, Li, Jinyan, Fong, Simon, and Huang, Thomas S. “A joint optimization framework of sparse coding and discriminative clustering”, IJCAI (2015).”

2  “To avoid ambiguity, if $y_i$ and $r_i$ are the same, i.e., the max value is reached by two cluster prototypes simultaneously in the current iteration, then we ignore the gradient update corresponding to $a_i$.”

3  “©2016 Society for Industrial and Applied Mathematics. Reprinted with permission, from Wang, Zhangyang, Chang, Shiyu, Zhou, Jiayu, Wang, Meng, and Huang, Thomas S. “Learning a task-specific deep architecture for clustering”, SDM (2016). All rights reserved.”

4  “We tested larger K values (3 or 4), but they did not bring noticeable performance improvements in our clustering cases.”

5  “Out of curiosity, we have also tried the architecture that treats W, S and θ in both stages as independent variables. We found that sharing parameters improves performance.”

6  “The default values of α and p are inferred from the related sparse coding literature [8], and validated in experiments.”

7  “Except for the “θ” layers, each of which contains only p free parameters and is thus ignored.”

8  “With various component numbers tested in [38], we choose their best cases (60 components).”

9  “In fact, although claimed to be applicable to multiple attributes, [38] only examined the first level features for pose clustering without considering expressions, since it relied on a warping technique to preprocess images, which gets rid of most expression variability.”

10  “Table 2 in [38] reports that the best accuracy of the pose clustering task falls around 28%, using the most suited layer of features.”
