A joint journal of the IEEE and the CAA, publishing high-quality English-language papers on original theoretical and experimental research and development in all areas of automation
Volume 11, Issue 2, February 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
C. Gong and  Y. You,  “Sparse reconstructive evidential clustering for multi-view data,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 2, pp. 459–473, Feb. 2024. doi: 10.1109/JAS.2023.123579

Sparse Reconstructive Evidential Clustering for Multi-View Data

doi: 10.1109/JAS.2023.123579
Funds: This work was supported in part by an NUS startup grant and the National Natural Science Foundation of China (52076037)
  • Although many multi-view clustering (MVC) algorithms with acceptable performance have been proposed, to the best of our knowledge, nearly all of them must be fed the correct number of clusters. In addition, existing algorithms produce only hard or fuzzy partitions for multi-view objects, which often lie in highly overlapping regions of the multi-view feature space. Hard and fuzzy partitions ignore the ambiguity and uncertainty in object assignment, which can degrade clustering performance. To address these issues, we propose a novel sparse reconstructive multi-view evidential clustering algorithm (SRMVEC). Based on a sparse reconstructive procedure, SRMVEC learns a shared affinity matrix across views and maps multi-view objects onto a two-dimensional human-readable chart by calculating two newly defined mathematical metrics for each object. From this chart, users can detect the number of clusters and select objects from the dataset as cluster centers. SRMVEC then derives a credal partition under the framework of evidence theory, improving the fault tolerance of clustering. Ablation studies show the benefits of adopting the sparse reconstructive procedure and evidence theory, and SRMVEC outperforms several state-of-the-art methods on benchmark datasets.
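    The pipeline the abstract describes — learn an affinity matrix by reconstructing each object from the others, then plot two per-object metrics so a user can read off the cluster count and centers — can be sketched as follows. This is a minimal single-view illustration, not the paper's implementation: it substitutes ridge-regularized least-squares reconstruction with coefficient thresholding for the sparse reconstructive step, and density-peaks-style metrics (`rho`, `delta`) for the paper's two unnamed chart metrics, which are assumptions here.

    ```python
    import numpy as np

    def reconstructive_affinity(X, lam=0.1, tol=1e-3):
        """Affinity via self-reconstruction: each object is expressed as a
        linear combination of the others (a least-squares stand-in for the
        paper's sparse reconstruction); small coefficients are zeroed."""
        n = X.shape[0]
        G = X @ X.T
        C = np.linalg.solve(G + lam * np.eye(n), G)  # ridge-regularized coefficients
        np.fill_diagonal(C, 0.0)                     # no self-reconstruction
        C[np.abs(C) < tol] = 0.0                     # mimic sparsity by thresholding
        return (np.abs(C) + np.abs(C.T)) / 2         # symmetric affinity matrix

    def decision_chart(A):
        """Two per-object metrics for a human-readable chart (density-peaks
        style, a hypothetical stand-in for the paper's metrics): rho = total
        affinity; delta = dissimilarity to the nearest denser object."""
        n = A.shape[0]
        rho = A.sum(axis=1)
        D = A.max() - A                              # affinity -> dissimilarity
        delta = np.empty(n)
        order = np.argsort(-rho)                     # objects by decreasing density
        delta[order[0]] = D[order[0]].max()          # densest object: max delta
        for rank in range(1, n):
            i = order[rank]
            delta[i] = D[i, order[:rank]].min()      # distance to nearest denser object
        return rho, delta

    # Toy data: two well-separated blobs along orthogonal axes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([8.0, 0.0], 0.2, (50, 2)),
                   rng.normal([0.0, 8.0], 0.2, (50, 2))])
    A = reconstructive_affinity(X)
    rho, delta = decision_chart(A)
    centers = np.argsort(-(rho * delta))[:2]         # objects standing out in the chart
    ```

    Plotting `delta` against `rho`, cluster centers appear as the few objects where both metrics are large; the user picks them visually, and an evidential assignment step would then build the credal partition around them.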

     



    Figures (6) / Tables (8)

    Article Metrics

    Article views (147) · PDF downloads (25)

    Highlights

    • Estimates the number of clusters for a multi-view clustering problem
    • Identifies the cluster center of each cluster of multi-view data
    • Uses a finer-grained credal partition to cluster multi-view data
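    The finer-grained credal partition generalizes hard and fuzzy partitions: each object receives a Dempster-Shafer mass function over subsets of the cluster frame, so indecision between clusters (mass on a multi-cluster set) and outlierness (mass on the empty set) are represented explicitly. A toy sketch with hypothetical masses for one object and two clusters:

    ```python
    # Credal membership of one object over the cluster frame {0, 1}.
    # The mass values are hypothetical; in evidential clustering, mass on
    # the empty set models outliers and mass on {0, 1} models indecision.
    m = {frozenset(): 0.05,
         frozenset({0}): 0.60,
         frozenset({1}): 0.10,
         frozenset({0, 1}): 0.25}
    assert abs(sum(m.values()) - 1.0) < 1e-9     # a mass function sums to 1

    def belief(m, S):
        """Total mass committed to non-empty subsets of S (lower bound)."""
        return sum(v for A, v in m.items() if A and A <= S)

    def plausibility(m, S):
        """Total mass on subsets intersecting S (upper bound)."""
        return sum(v for A, v in m.items() if A & S)

    bel = belief(m, frozenset({0}))          # 0.60
    pl = plausibility(m, frozenset({0}))     # 0.60 + 0.25 = 0.85
    ```

    A hard partition is the special case where all mass sits on one singleton, and a fuzzy partition the case where mass is spread over singletons only; the belief/plausibility interval `[0.60, 0.85]` quantifies the assignment uncertainty that those partitions discard.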
