A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical and experimental research and development in all areas of automation.
Volume 11 Issue 6
Jun. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 4% (SCI Q1)
    CiteScore: 23.5, Top 2% (Q1)
    Google Scholar h5-index: 77, TOP 5
S. Qi, R. Wang, T. Zhang, X. Yang, R. Sun, and L. Wang, “A two-layer encoding learning swarm optimizer based on frequent itemsets for sparse large-scale multi-objective optimization,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 6, pp. 1342–1357, Jun. 2024. doi: 10.1109/JAS.2024.124341

A Two-Layer Encoding Learning Swarm Optimizer Based on Frequent Itemsets for Sparse Large-Scale Multi-Objective Optimization

doi: 10.1109/JAS.2024.124341
Funds:  This work was supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002)
  • Traditional large-scale multi-objective evolutionary algorithms (LSMOEAs) encounter difficulties with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding that optimizes the binary variable mask and the real variable dec separately. Existing optimizers, however, typically optimize the binary mask by locating the positions of non-zero variables, and approximating the sparse distribution of the real Pareto-optimal solutions does not necessarily optimize the objective functions. In data mining, frequent itemsets that appear together in a dataset are commonly mined to reveal correlations between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in both performance and convergence speed.
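The two-layer encoding described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: `decode`, the dimensionality, and the sparsity rate are all assumptions for the example.

```python
import numpy as np

def decode(mask: np.ndarray, dec: np.ndarray) -> np.ndarray:
    """Combine the binary mask layer and the real-valued dec layer into
    the actual decision vector: a variable is zero wherever mask is 0."""
    return mask * dec

rng = np.random.default_rng(0)
D = 10                                        # number of decision variables
mask = (rng.random(D) < 0.2).astype(float)    # sparse binary layer
dec = rng.uniform(-1.0, 1.0, D)               # real-valued layer
x = decode(mask, dec)                         # sparse candidate solution
```

Because the mask forces most entries of `x` to zero, the optimizer can search for *which* variables are non-zero (the mask) and *what value* they take (dec) as two coupled subproblems.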


    Figures(7)  / Tables(6)


    Highlights

    • We discuss the limitations of the prevailing two-layer encoding SLMOEAs: most such algorithms strive to emulate the sparse distribution of the real Pareto-optimal solutions but overlook the change in objective function value
    • We propose an innovative mask learning (ML) strategy based on frequent itemsets. ML selects a set of outstanding particles for each particle and gathers potentially advantageous information fragments from distinct dimensions. In contrast to conventional methods that focus on detecting the sparsity of individual decision variables to approximate the sparse distribution of the real PS, ML updates the mask with the goal of generating solutions with superior objective values
    • We propose a dynamic mutation strategy for binary variables suited to ML. It adjusts the mutation probability of masks according to the stage of evolution, which helps particles avoid premature convergence caused by wrong mask combinations in the early stages and find optimal mask combinations quickly in the later stages
    • This paper presents a swarm optimizer that uses two-layer encoding learning based on frequent itemsets. The real variables learn and are optimized synchronously with the corresponding mask to achieve better objective values
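A rough illustration of the two ideas in the highlights, under stated assumptions rather than as the paper's actual algorithm: the mask-learning step is simplified to one-itemset mining (keeping dimensions set to 1 in a large enough fraction of elite masks), and the dynamic mutation is sketched as a linear decay; `min_support`, `p0`, and the linear schedule are illustrative choices.

```python
import numpy as np

def frequent_item_mask(elite_masks: np.ndarray, min_support: float = 0.5) -> np.ndarray:
    """Keep the dimensions that are 'frequent items': set to 1 in at least
    a min_support fraction of the elite particles' masks (a one-itemset
    simplification of frequent-itemset mining)."""
    support = elite_masks.mean(axis=0)      # per-dimension frequency of 1s
    return (support >= min_support).astype(int)

def mutation_probability(gen: int, max_gen: int, p0: float = 0.1) -> float:
    """Dynamic mutation sketch: higher early (to escape wrong mask
    combinations), decaying later (to lock in good combinations)."""
    return p0 * (1.0 - gen / max_gen)

# Three elite masks over five decision variables.
elites = np.array([[1, 0, 1, 0, 0],
                   [1, 0, 1, 1, 0],
                   [1, 0, 0, 1, 0]])
print(frequent_item_mask(elites))           # [1 0 1 1 0]
```

Dimensions 0, 2, and 3 are each non-zero in at least half of the elites, so the learned mask activates exactly those positions; a particle would then mutate this mask with a probability that shrinks as the run progresses.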
