Deep Structural Contrastive Subspace Clustering

Bo Peng (The University of Queensland); Wenjie Zhu (China Jiliang University)*


Deep subspace clustering based on data self-expression is devoted to learning pairwise affinities in the latent feature space. Existing methods tend to rely on an autoencoder framework to learn representations for an affinity matrix. However, representation learning driven largely by pixel-level data reconstruction is somewhat incompatible with the subspace clustering task. In the absence of ground-truth labels, can structural representations, which are exactly what subspace clustering favors, be achieved by simply exploiting the supervision information in the data itself? In this paper, we formulate this intuition as a structural contrastive prediction task and propose an end-to-end trainable framework referred to as Deep Structural Contrastive Subspace Clustering (DSCSC). Specifically, DSCSC uses data augmentation to mine positive pairs and constructs a data similarity graph in the embedding feature space to search for negative pairs. A novel structural contrastive loss is imposed on the latent representations to achieve a positive-concentrated and negative-separated property for subspace preservation. Extensive experiments on benchmark datasets demonstrate that our method outperforms state-of-the-art deep subspace clustering methods and imply the necessity of the proposed structural contrastive loss.
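To make the abstract's description concrete, the sketch below illustrates one plausible reading of the structural contrastive loss: each sample's augmented view serves as its positive, and its least-similar neighbors under a cosine-similarity graph over the batch serve as negatives. This is a hypothetical minimal implementation for illustration only; the function name, the negative-mining rule, and the temperature parameter are assumptions, not the authors' exact formulation.

```python
import numpy as np

def structural_contrastive_loss(z, z_aug, k_neg=2, tau=0.5):
    """Illustrative contrastive loss (hypothetical, not the paper's exact form).

    z      : (n, d) latent embeddings of the original samples
    z_aug  : (n, d) latent embeddings of their augmented views (positives)
    k_neg  : number of negatives mined per sample from the similarity graph
    tau    : softmax temperature
    """
    # L2-normalize so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)

    # Data similarity graph over the batch in the embedding space
    sim = z @ z.T
    np.fill_diagonal(sim, np.inf)  # exclude self when ranking neighbors

    losses = []
    for i in range(len(z)):
        # Positive pair: the sample and its augmented view
        pos = np.exp(np.dot(z[i], z_aug[i]) / tau)
        # Negative pairs: the k_neg least-similar samples in the graph
        neg_idx = np.argsort(sim[i])[:k_neg]
        neg = np.exp(sim[i][neg_idx] / tau).sum()
        # InfoNCE-style term: pull the positive close, push negatives apart
        losses.append(-np.log(pos / (pos + neg)))
    return float(np.mean(losses))
```

Under this reading, minimizing the loss concentrates each sample with its augmented positive while separating it from graph-mined negatives, which is the "positive-concentrated and negative-separated" property the abstract describes.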