Citation: | X. Q. Yan, K. Deng, Q. Zou, Z. Tian, and H. Yu, “Self-cumulative contrastive graph clustering,” IEEE/CAA J. Autom. Sinica, 2024. doi: 10.1109/JAS.2024.125025 |
[1] |
L. Yang, C. Lv, X. Wang, J. Qiao, W. Ding, J. Zhang, and F. Wang, “Collective entity alignment for knowledge fusion of power grid dispatching knowledge graphs,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 11, pp. 1990–2004, 2022. doi: 10.1109/JAS.2022.105947
|
[2] |
X. Wang, S. Zhao, L. Guo, L. Zhu, C. Cui, and L. Xu, “Graphca: Learning from graph counterfactual augmentation for knowledge tracing,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 11, pp. 2108–2123, 2023. doi: 10.1109/JAS.2023.123678
|
[3] |
X. Xue, X. Yu, D. Zhou, X. Wang, C. Bi, S. Wang, and F. Wang, “Computational experiments for complex social systems: Integrated design of experiment system,” IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 5, pp. 1175–1189, 2024. doi: 10.1109/JAS.2023.123639
|
[4] |
X. Yan, Y. Ye, X. Qiu, M. Manic, and H. Yu, “Cmib: unsupervised image object categorization in multiple visual contexts,” IEEE Trans. Industrial Informatics, vol. 16, no. 6, pp. 3974–3986, 2019.
|
[5] |
X. Yan, Y. Mao, M. Li, Y. Ye, and H. Yu, “Multitask image clustering via deep information bottleneck,” IEEE Trans. Cybernetics, pp. 1–14, 2023.
|
[6] |
Z. Wei, H. Zhao, Z. Li, X. Bu, Y. Chen, X. Zhang, Y. Lv, and F. Wang, “STGSA: A novel spatial-temporal graph synchronous aggregation model for traffic prediction,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 1, pp. 226–238, 2023. doi: 10.1109/JAS.2023.123033
|
[7] |
J. Li, R. Zheng, H. Feng, M. Li, and X. Zhuang, “Permutation equivariant graph framelets for heterophilous graph learning,” IEEE Trans. Neural Networks and Learning Systems, vol. Early Access, 2024.
|
[8] |
M. Li, A. Micheli, Y. G. Wang, S. Pan, P. Lió, G. S. Gnecco, and M. Sanguineti, “Guest editorial: Deep neural networks for graphs: theory, models, algorithms, and applications,” IEEE Trans. Neural Networks and Learning Systems, vol. 35, no. 4, pp. 4367–4372, 2024. doi: 10.1109/TNNLS.2024.3371592
|
[9] |
A. Bessadok, M. A. Mahjoub, and I. Rekik, “Graph neural networks in network neuroscience,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, no. 5, pp. 5833–5848, 2023. doi: 10.1109/TPAMI.2022.3209686
|
[10] |
C. Huang, M. Li, F. Cao, H. Fujita, Z. Li, and X. Wu, “Are graph convolutional networks with random weights feasible?,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 2751–2768, 2023. doi: 10.1109/TPAMI.2022.3183143
|
[11] |
S. Wang, X. Lin, Z. Fang, S. Du, and G. Xiao, “Contrastive consensus graph learning for multi-view clustering,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 11, pp. 2027–2030, 2022. doi: 10.1109/JAS.2022.105959
|
[12] |
T. N. Kipf and M. Welling, “Variational graph auto-encoders,” CoRR, vol. abs/1611.07308, 2016.
|
[13] |
X. Yan, X. Yu, S. Hu, and Y. Ye, “Mutual boost network for attributed graph clustering,” Expert Systems with Applications, vol. 229, p. 120479, 2023. doi: 10.1016/j.eswa.2023.120479
|
[14] |
W. Tu, S. Zhou, X. Liu, X. Guo, Z. Cai, E. Zhu, and J. Cheng, “Deep fusion clustering network,” in Proc. AAAI Conf. on Artificial Intelligence, 2021, pp. 9978–9987.
|
[15] |
X. Yan, Z. Jin, F. Han, and Y. Ye, “Differentiable information bottleneck for deterministic multi-view clustering,” in Proc. the IEEE Conf. on Computer Vision and Pattern Recognition, 2024, pp. 27435–27444.
|
[16] |
X. Yan, Y. Mao, Y. Ye, and H. Yu, “Cross-modal clustering with deep correlated information bottleneck method,” IEEE Trans. Neural Networks and Learning Systems, vol. Early access, 2023.
|
[17] |
X. Yan, Y. Gan, Y. Mao, Y. Ye, and H. Yu, “Live and learn: Continual action clustering with incremental views,” in Proc. the AAAI Conf. on Artificial Intelligence, vol. 38, no. 15, 2024, pp. 16264–16271.
|
[18] |
A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: analysis and an algorithm,” in Proc. Advances in Neural Information Processing Systems, 2001, pp. 849–856.
|
[19] |
M. Stoer and F. Wagner, “A simple min-cut algorithm,” Journal of the ACM, vol. 44, no. 4, pp. 585–591, 1997. doi: 10.1145/263867.263872
|
[20] |
Z. Lin and Z. Kang, “Graph filter-based multi-view attributed graph clustering,” in Proc. Int. Joint Conf. on Artificial Intelligence, 2021, pp. 2723–2729.
|
[21] |
D. Bo, X. Wang, C. Shi, M. Zhu, E. Lu, and P. Cui, “Structural deep clustering network,” in Proc. the Web Conf., 2020, pp. 1400–1410.
|
[22] |
H. Xu, W. Xia, Q. Gao, J. Han, and X. Gao, “Graph embedding clustering: Graph attention auto-encoder with cluster-specificity distribution,” Neural Networks, vol. 142, pp. 221–230, 2021. doi: 10.1016/j.neunet.2021.05.008
|
[23] |
C. Gao, J. Zhu, F. Zhang, Z. Wang, and X. Li, “A novel representation learning for dynamic graphs based on graph convolutional networks,” IEEE Trans. Cybernetics, vol. 53, no. 6, pp. 3599–3612, 2023. doi: 10.1109/TCYB.2022.3159661
|
[24] |
X. Yang, C. Deng, K. Wei, J. Yan, and W. Liu, “Adversarial learning for robust deep clustering,” vol. 33, pp. 9098–9108, 2020.
|
[25] |
S. Pan, R. Hu, S. Fung, G. Long, J. Jiang, and C. Zhang, “Learning graph embedding with adversarial training methods,” IEEE Trans. Cybernetics, vol. 50, no. 6, pp. 2475–2487, 2020. doi: 10.1109/TCYB.2019.2932096
|
[26] |
G. K. Kulatilleke, M. Portmann, and S. S. Chandra, “SCGC: Self-supervised contrastive graph clustering,” CoRR, vol. abs/2204.12656, 2022.
|
[27] |
Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Graph contrastive learning with adaptive augmentation,” in Proc. the Web Conf., 2021, pp. 2069–2080.
|
[28] |
Z. Zhou, Y. Hu, Y. Zhang, J. Chen, and H. Cai, “Multiview deep graph infomax to achieve unsupervised graph embedding,” IEEE Trans. Cybernetics, vol. 53, no. 10, pp. 6329–6339, 2023. doi: 10.1109/TCYB.2022.3163721
|
[29] |
H. Zhong, J. Wu, C. Chen, J. Huang, M. Deng, L. Nie, Z. Lin, and X. Hua, “Graph contrastive clustering,” in Proc. IEEE Int. Conf. on Computer Vision, 2021, pp. 9204–9213.
|
[30] |
W. Xia, Q. Wang, Q. Gao, M. Yang, and X. Gao, “Self-consistent contrastive attributed graph clustering with pseudo-label prompt,” IEEE Trans. Multimedia, vol. 25, pp. 6665–6677, 2022.
|
[31] |
Y. Liu, M. Jin, S. Pan, C. Zhou, Y. Zheng, F. Xia, and P. S. Yu, “Graph self-supervised learning: A survey,” IEEE Trans. Knowledge and Data Engineering, vol. 35, no. 6, pp. 5879–5900, 2023.
|
[32] |
X. Yang, Y. Liu, S. Zhou, S. Wang, W. Tu, Q. Zheng, X. Liu, L. Fang, and E. Zhu, “Cluster-guided contrastive graph clustering network,” in Proc. AAAI Conf. on Artificial Intelligence, 2023, pp. 10834–10842.
|
[33] |
J. Zhou, J. Shen, and Q. Xuan, “Data augmentation for graph classification,” in Proc. the ACM Int. Conf. on Information and Knowledge Management, 2020, pp. 2341–2344.
|
[34] |
W. Li, E. Zhu, S. Wang, and X. Guo, “Graph clustering with high-order contrastive learning,” Entropy, vol. 25, no. 10, p. 1432, 2023. doi: 10.3390/e25101432
|
[35] |
Z. Hou, X. Liu, Y. Cen, Y. Dong, H. Yang, C. Wang, and J. Tang, “Graphmae: Self-supervised masked graph autoencoders,” in Proc. the ACM SIGKDD Conf. on Knowledge Discovery and Data Mining, Washington, 2022, pp. 594–604.
|
[36] |
T. Wang, G. Yang, Q. He, Z. Zhang, and J. Wu, “Ncagc: A neighborhood contrast framework for attributed graph clustering,” arXiv, 2022.
|
[37] |
Y. Liu, X. Yang, S. Zhou, X. Liu, S. Wang, K. Liang, W. Tu, and L. Li, “Simple contrastive graph clustering,” IEEE Trans. Neural Networks and Learning Systems, pp. 1–12, 2023.
|
[38] |
Y. Hu, H. You, Z. Wang, Z. Wang, E. Zhou, and Y. Gao, “Graph-mlp: Node classification without message passing in graph,” CoRR, vol. abs/2106.04051, 2021.
|
[39] |
H. Zhao, X. Yang, K. Wei, C. Deng, and D. Tao, “Unsupervised graph transformer with augmentation-free contrastive learning,” IEEE Trans. Knowledge and Data Engineering, vol. Early access, pp. 1–12, 2024.
|
[40] |
Y. Wang, D. Chang, Z. Fu, J. Wen, and Y. Zhao, “Graph contrastive partial multi-view clustering,” IEEE Trans. Multimedia, vol. 25, pp. 6551–6562, 2023. doi: 10.1109/TMM.2022.3210376
|
[41] |
G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006. doi: 10.1126/science.1127647
|
[42] |
T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in Int. Conf. on Learning Representations, 2017.
|
[43] |
Z. Peng, H. Liu, Y. Jia, and J. Hou, “Attention-driven graph clustering network,” in Proc. ACM Int. Conf. on Multimedia, 2021, pp. 935–943.
|
[44] |
J. Xie, R. B. Girshick, and A. Farhadi, “Unsupervised deep embedding for clustering analysis,” in Proc. Int. Conf. on Machine Learning, 2016, pp. 478–487.
|
[45] |
C. Wang, S. Pan, R. Hu, G. Long, J. Jiang, and C. Zhang, “Attributed graph clustering: A deep attentional embedding approach,” in Proc. Int. Joint Conf. on Artificial Intelligence, 2019, pp. 3670–3676.
|
[46] |
N. Lee, J. Lee, and C. Park, “Augmentation-free self-supervised learning on graphs,” in Proc. the AAAI Conf. on Artificial Intelligence, 2022, pp. 7372–7380.
|
[47] |
J. Yu, H. Yin, X. Xia, T. Chen, L. Cui, and Q. V. H. Nguyen, “Are graph augmentations necessary? simple graph contrastive learning for recommendation,” in Proc. the Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2022, pp. 1294–1303.
|
[48] |
Y. Min, F. Wenkel, and G. Wolf, “Scattering gcn: Overcoming oversmoothness in graph convolutional networks,” vol. 33, pp. 14498–14508, 2020.
|
[49] |
L. Van der Maaten and G. Hinton, “Visualizing data using t-sne.,” Journal of Machine Learning Research, vol. 9, no. 11, pp. 2579–2605, 2008.
|
[50] |
K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in ICCV, 2015, pp. 1026–1034.
|
[51] |
G. Namata, B. London, L. Getoor, B. Huang, and U. Edu, “Query-driven active surveying for collective classification,” in Proc. the Int. Workshop on Mining and Learning with Graphs, vol. 8, 2012, p. 1.
|
[52] |
D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, “RCV1: A new benchmark collection for text categorization research,” Journal of Machine Learning Research, vol. 5, pp. 361–397, 2004.
|
[53] |
G. Guo, H. Wang, D. Bell, Y. Bi, and K. Greer, “Knn model-based approach in classification,” in Proc. on The Move to Meaningful Internet Systems, 2003, pp. 986–996.
|
[54] |
K. Golalipour, E. Akbari, S. S. Hamidi, M. Lee, and R. Enayatifar, “From clustering to clustering ensemble selection: A review,” Engineering Applications of Artificial Intelligence, p. 104388, 2021.
|
[55] |
X. He, B. Wang, Y. Hu, J. Gao, Y. Sun, and B. Yin, “Parallelly adaptive graph convolutional clustering model,” IEEE Trans. Neural Networks and Learning Systems, vol. 35, no. 4, pp. 4451–4464, 2024. doi: 10.1109/TNNLS.2022.3176411
|
[56] |
D. Xia, X. Wang, N. Liu, and C. Shi, “Learning invariant representations of graph neural networks via cluster generalization,” in Proc. Advances in Neural Information Processing Systems, vol. 36, 2024.
|
[57] |
Y.-K. Xu, D. Huang, C.-D. Wang, and J.-H. Lai, “Glac-gcn: Global and local topology-aware contrastive graph clustering network,” IEEE Trans. Artificial Intelligence, vol. Early access, pp. 1–12, 2024.
|
[58] |
D. Shi, L. Zhu, Y. Li, J. Li, and X. Nie, “Robust structured graph clustering,” IEEE Trans. Neural Networks and Learning Systems, vol. 31, no. 11, pp. 4424–4436, 2019.
|
[59] |
M. Allaoui, M. L. Kherfi, and A. Cheriet, “Considerably improving clustering algorithms using umap dimensionality reduction technique: A comparative study,” in Proc. Int. Conf. on Image and Signal Processing, 2020, pp. 317–325.
|
[60] |
P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, Y. Bengio et al., “Graph attention networks,” Stat, vol. 1050, no. 20, pp. 10–48550, 2017.
|
[61] |
H.-H. Bock, “On some significance tests in cluster analysis,” Journal of Classification, vol. 2, pp. 77–108, 1985. doi: 10.1007/BF01908065
|
[62] |
C. Kuo, X. Wang, P. B. Walker, O. T. Carmichael, J. Ye, and I. Davidson, “Unified and contrasting cuts in multiple graphs: Application to medical imaging segmentation,” in Proc. the ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, 2015, pp. 617–626.
|