A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical/experimental research and development in all areas of automation

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: X. Wang, Q. Kang, M. C. Zhou, Q. Deng, Z. Fan, and H. Liu, “Knowledge classification-assisted evolutionary multitasking for two-task multiobjective optimization problems,” IEEE/CAA J. Autom. Sinica, 2025. doi: 10.1109/JAS.2024.125070

Knowledge Classification-Assisted Evolutionary Multitasking for Two-Task Multiobjective Optimization Problems

doi: 10.1109/JAS.2024.125070
Funds: This work was supported in part by the National Natural Science Foundation of China (51775385), the Natural Science Foundation of Shanghai (23ZR1466000), the Shanghai Industrial Collaborative Science and Technology Innovation Project (2021-cyxt2-kj10), and the Innovation Program of Shanghai Municipal Education Commission (202101070007E00098).
  • Abstract: To realize Industry 5.0, manufacturers face various optimization problems that seldom appear in isolation. Evolutionary multitasking (EMT) is an effective method for solving multiple related problems by extracting and utilizing their common knowledge, and knowledge transfer is the key to its effectiveness. Existing EMT methods mainly focus on designing effective inter-task learning mechanisms and ignore the fact that the appropriateness of the transferred knowledge also has a significant effect on EMT’s performance. Assistant tasks contain plentiful knowledge, and knowledge transfer may work poorly, or even have a negative effect, if useless knowledge is selected to guide the target task. EMT is thus confronted with the challenge of finding appropriate knowledge. This work proposes an efficient knowledge classification-assisted EMT framework to identify and select valuable knowledge from assistant tasks. During the evolution process, better-performing candidates are expected to have advantages in exploitation. Therefore, assistant individuals that are similar to better-performing target individuals are used to provide positive knowledge. Specifically, the target sub-population is divided into different levels, and a classifier is then trained to divide the assistant sub-population accordingly. Considering that the target and assistant sub-populations have different characteristics, domain adaptation is used to reduce their distribution discrepancies. In this way, the trained classifier can classify assistant individuals more accurately, and truly useful knowledge can be selected for the target task. The superior performance of the proposed framework over state-of-the-art algorithms is verified on a series of benchmark problems.
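To make the selection workflow described in the abstract concrete, below is a minimal, self-contained sketch of one way such a knowledge classification step could look. It is not the authors’ implementation: the level assignment by scalar fitness (rather than nondominated sorting), the mean/std alignment used as a stand-in for a full domain-adaptation method, the k-nearest-neighbor classifier, and all function names are illustrative assumptions.

```python
# Sketch of a knowledge classification-assisted selection step for EMT.
# Assumptions (not the paper's implementation): individuals are real-valued
# decision vectors, target "levels" come from a scalar fitness ranking, domain
# adaptation is approximated by per-dimension moment alignment, and the
# classifier is k-nearest neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def align_distributions(assistant, target):
    """Crude domain adaptation: shift/scale assistant individuals toward the
    target sub-population's per-dimension mean and standard deviation."""
    a_mean, a_std = assistant.mean(axis=0), assistant.std(axis=0) + 1e-12
    t_mean, t_std = target.mean(axis=0), target.std(axis=0) + 1e-12
    return (assistant - a_mean) / a_std * t_std + t_mean


def select_transfer_candidates(target_pop, target_fitness, assistant_pop,
                               n_levels=3, k=5):
    """Label target individuals by fitness level, train a classifier on them,
    and return assistant individuals predicted to belong to the best level."""
    # 1) Divide the target sub-population into levels (level 0 = best).
    order = np.argsort(target_fitness)
    per_level = int(np.ceil(len(target_pop) / n_levels))
    levels = np.empty(len(target_pop), dtype=int)
    levels[order] = np.repeat(np.arange(n_levels), per_level)[:len(target_pop)]

    # 2) Reduce the distribution discrepancy between the two sub-populations.
    assistant_aligned = align_distributions(assistant_pop, target_pop)

    # 3) Train the classifier on level-labeled target individuals and
    #    classify the (aligned) assistant individuals.
    clf = KNeighborsClassifier(n_neighbors=min(k, len(target_pop)))
    clf.fit(target_pop, levels)
    predicted = clf.predict(assistant_aligned)

    # 4) Keep assistant individuals similar to the best-performing target level;
    #    these are treated as carriers of positive knowledge.
    return assistant_pop[predicted == 0]


# Toy usage: 30-individual sub-populations in a 10-dimensional decision space.
rng = np.random.default_rng(0)
target = rng.random((30, 10))
assistant = rng.random((30, 10)) * 1.5          # deliberately shifted distribution
fitness = target.sum(axis=1)                    # placeholder scalar fitness
candidates = select_transfer_candidates(target, fitness, assistant)
print(candidates.shape)
```

In the actual multiobjective setting, nondominated sorting would be the natural way to derive levels for the target sub-population, and a dedicated domain-adaptation technique would replace the simple moment matching shown here.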

     

  • [1]
    Z. Lv, L. Wang, Z. Han, J. Zhao, and W. Wang, “Surrogate-assisted particle swarm optimization algorithm with pareto active learning for expensive multi-objective optimization,” IEEE/CAA J. Autom. Sinica, vol. 6, no. 3, pp. 838–849, May. 2019. doi: 10.1109/JAS.2019.1911450
    [2]
    L. Zhang, Q. Kang, Q. Deng, L. Xu, and Q. Wu, “A Line complex-based evolutionary algorithm for many-objective optimization,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 5, pp. 1150–1167, May. 2023. doi: 10.1109/JAS.2023.123495
    [3]
    Q. Kang, X. Y. Song, M. C. Zhou, and L. Li, “A collaborative resource allocation strategy for decomposition-based multi-objective evolutionary algorithms,” IEEE Trans. Syst. , Man, and Cybern. Syst., vol. 49, no. 12, pp. 2416–2423, Dec. 2019. doi: 10.1109/TSMC.2018.2818175
    [4]
    Q. Deng, Q. Kang, L. Zhang, M. Zhou, and J. An, “Objective space-based population generation to accelerate evolutionary algorithms for large-scale many-objective optimization,” IEEE Trans. Evol. Comput., vol. 27, no. 2, pp. 326–340, Apr. 2023. doi: 10.1109/TEVC.2022.3166815
    [5]
    L. Zhang, K. Wang, L. Xu, W. Sheng, and Q. Kang, “Evolving ensembles using multi-objective genetic programming for imbalanced classification,” Knowledge-based Systems, vol. 255, pp. 1–17, Nov. 2022.
    [6]
    Y. Wang, S. Gao, M. Zhou, and Y. Yu, “A multi-layered gravitational search algorithm for function optimization and real-world problems,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 1, pp. 94–109, Jan. 2021. doi: 10.1109/JAS.2020.1003462
    [7]
    Q. Yang, W. -N. Chen, J. D. Deng, Y. Li, T. Gu, and J. Zhang, “A level-based learning swarm optimizer for large-scale optimization,” IEEE Trans. Evol. Comput., vol. 22, no. 4, pp. 578–594, Aug. 2018.
    [8]
    Y. Fu, M. Zhou, X. Guo and L. Qi, “Scheduling dual-objective stochastic hybrid flow shop with deteriorating jobs via bi-population evolutionary algorithm,” IEEE Trans. Syst. , Man, and Cybern. Syst., vol. 50, no. 12, pp. 5037–5048, Dec. 2020. doi: 10.1109/TSMC.2019.2907575
    [9]
    C. Liu, J. Wang, M. Zhou and T. Zhou, “Intelligent optimization approach to cell formation and product scheduling for multifactory cellular manufacturing systems considering supply chain and operational error,” IEEE Trans. Syst. , Man, and Cybern. Syst., vol. 53, no. 8, pp. 4649–4660, Aug. 2023. doi: 10.1109/TSMC.2023.3253471
    [10]
    M. Zhou, Y. Qiao, B. Liu, B. Vogel-Heuser and H. Kim, “Machine learning for Industry 4.0,” IEEE Robotics & Automation Magazine, vol. 30, no. 2, pp. 8–9, Jun. 2023.
    [11]
    X. Guo, M. Zhou, A. Abusorrah, F. Alsokhiry, and K. Sedraoui, “Disassembly sequence planning: A survey,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 7, pp. 1308–1324, Jul. 2021. doi: 10.1109/JAS.2020.1003515
    [12]
    Y. Xiang, Y. Zhou, M. Li, and Z. Chen, “A vector angle-based evolutionary algorithm for unconstrained many-objective problems,” IEEE Trans. Evol. Comput., vol. 21, no. 1, pp. 131–152, Feb. 2017. doi: 10.1109/TEVC.2016.2587808
    [13]
    K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints,” IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 577–601, Aug. 2014. doi: 10.1109/TEVC.2013.2281535
    [14]
    A. Gupta, Y. -S. Ong, and L. Feng, “Multifactorial evolution: Toward evolutionary multitasking,” IEEE Trans. Evol. Comput. , vol. 20, no. 3, pp. 343–357, Jun. 2016.
    [15]
    Z. Chen, Y. Zhou, X. He, and J. Zhang, “Learning task relationships in evolutionary multitasking for multiobjective continuous optimization,” IEEE Trans. Cybern., vol. 52, no. 6, pp. 5278–5289, Jun. 2022. doi: 10.1109/TCYB.2020.3029176
    [16]
    M.-Y. Cheng, A. Gupta, Y.-S. Ong, and Z.-W. Ni, “Coevolutionary multitasking for concurrent global optimization: With case studies in complex engineering design,” Eng. Appl. Artif. Intell., vol. 64, no. 3, pp. 13–24, 2017.
    [17]
    J. Yi, J. Bai, H. He, W. Zhou, and L. Yao, “A multifactorial evolutionary algorithm for multitasking under interval uncertainties,” IEEE Trans. Evol. Comput., vol. 24, no. 5, pp. 908–922, Oct. 2020. doi: 10.1109/TEVC.2020.2975381
    [18]
    X. Wang, Q. Kang, M. Zhou, S. Yao, and A. Abusorrah, “Domain adaptation multitask optimization,” IEEE Trans. Cybern., vol. 53, no. 7, pp. 4567–4578, Jul. 2023. doi: 10.1109/TCYB.2022.3222101
    [19]
    K. K. Bali, A. Gupta, L. Feng, Y. S. Ong, and T. P. Siew, “Linearized domain adaptation in evolutionary multitasking,” in Proc. IEEE Congr. Evol. Comput., Spain, 2017, pp. 1295–1302.
    [20]
    Z. Tang, M. Gong, Y. Wu, et. al., “Regularized evolutionary multitask optimization: Learning to intertask transfer in aligned subspace,” IEEE Trans. Evol. Comput. , vol. 25, no. 2, pp. 262–276, Apr. 2021.
    [21]
    H. Han, W. Lu, L. Zhang, and J. Qiao, “Adaptive gradient multiobjective particle swarm optimization,” IEEE Trans. Cybern., vol. 48, no. 11, pp. 3067–3079, Nov. 2018. doi: 10.1109/TCYB.2017.2756874
    [22]
    S. Yao, Q. Kang, M. Zhou, M. Rawa, and A. Albeshri, “Discriminative manifold distribution alignment for domain adaptation,” IEEE Trans. Syst. , Man, Cybern. , Syst., vol. 53, no. 2, pp. 1183–1197, Feb. 2023. doi: 10.1109/TSMC.2022.3195239
    [23]
    S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, “Domain adaptation via transfer component analysis,” IEEE Trans. Neural. Netw., vol. 22, no. 2, pp. 199–210, Feb. 2011. doi: 10.1109/TNN.2010.2091281
    [24]
    A. Gupta, Y.-S. Ong, L. Feng, and K. C. Tan, “Multiobjective multifactorial optimization in evolutionary multitasking,” IEEE Trans. Cybern. , vol. 47, no. 7, pp. 1652–1665, Jul. 2017.
    [25]
    K. K. Bali, Y.-S. Ong, A. Gupta, and P. S. Tan, “Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II,” IEEE Trans. Evol. Comput., vol. 24, no. 1, pp. 69–83, Feb. 2020.
    [26]
    K. K. Bali, A. Gupta, Y.-S. Ong, and P. S. Tan, “Cognizant multitasking in multiobjective multifactorial evolution: MO-MFEA-II,” IEEE Trans. Cybern. , vol. 51, no. 4, pp. 1784–1796, Apr. 2021.
    [27]
    Z. Liang et al., “Multiobjective evolutionary multitasking with two-stage adaptive knowledge transfer based on population distribution,” IEEE Trans. Syst. , Man, Cybern. , Syst., vol. 52, no. 7, pp. 4457–4469, Jul. 2022. doi: 10.1109/TSMC.2021.3096220
    [28]
    C. Yang, J. Ding, K. C. Tan, and Y. Jin, “Two-stage assortative mating for multiobjective multifactorial evolutionary optimization,” in Proc. IEEE Annu. Conf. Decis. Control, Dec. 2017, pp. 76–81.
    [29]
    Z. Liang, Y. Zhu, X. Wang, Z. Li, and Z. Zhu, “Evolutionary multitasking for multi-objective optimization based on generative strategies,” IEEE Trans. Evol. Comput., vol. 27, no. 4, pp. 1042–1056, Aug. 2023. doi: 10.1109/TEVC.2022.3189029
    [30]
    K. Qiao, J. Liang, Z. Liu, K. Yu, C. Yue, and B. Qu, “Evolutionary multitasking with global and local auxiliary tasks for constrained multi-objective optimization,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 10, pp. 1951–1964, Oct. 2023 doi: 10.1109/JAS.2023.123336
    [31]
    L. Feng et al., “Explicit evolutionary multitasking for combinatorial optimization: A case study on capacitated vehicle routing problem,” IEEE Trans. Cybern., vol. 51, no. 6, pp. 3143–3156, Jun. 2021. doi: 10.1109/TCYB.2019.2962865
    [32]
    J. Lin, H.-L. Liu, B. Xue, M. Zhang, and F. Gu, “Multiobjective multitasking optimization based on incremental learning,” IEEE Trans. Cybern., vol. 24, no. 5, pp. 824–838, Oct. 2020.
    [33]
    G. Yokoya, H. Xiao, and T. Hatanaka, “Multifactorial optimization using artificial bee colony and its application to car structure design optimization,” in Proc. IEEE Congr. Evol. Comput., 2019, pp. 3404–3409.
    [34]
    F. Ming, W. Gong, L. Wang, and L. Gao, “Constrained multiobjective optimization via multitasking and knowledge transfer,” IEEE Trans. Evol. Comput., vol. 28, no. 1, pp. 77–89, Feb. 2024. doi: 10.1109/TEVC.2022.3230822
    [35]
    Y. Feng, L. Feng, S. Kwong, and K. C. Tan, “A multivariation multifactorial evolutionary algorithm for large-scale multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 26, no. 2, pp. 248–262, Apr. 2022. doi: 10.1109/TEVC.2021.3119933
    [36]
    X. Ji, Y. Zhang, D. Gong, X. Sun, and Y. Guo, “Multisurrogate-assisted multitasking particle swarm optimization for expensive multimodal problems,” IEEE Trans. Cybern., vol. 53, no. 4, pp. 2516–2530, Apr. 2023. doi: 10.1109/TCYB.2021.3123625
    [37]
    Y. Jing, L. Hu, W.-S. Ku, and C. Shahabi, “Authentication of k nearest neighbor query on road networks,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 6, pp. 1494–1506, Jun. 2014.
    [38]
    S. S. Mullick, S. Datta, and S. Das, “Adaptive learning-based k-nearest neighbor classifiers with resilience to class imbalance,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 11, pp. 5713–5725, Nov. 2018. doi: 10.1109/TNNLS.2018.2812279
    [39]
    Q. Kang, S. Y. Yao, M. C. Zhou, K. Zhang, and A. Abusorrah, “Effective visual domain adaptation via generative adversarial distribution matching,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 9, pp. 3919–3929, Sep. 2021. doi: 10.1109/TNNLS.2020.3016180
    [40]
    S. Yao, Q. Kang, M. Zhou, M. Rawa, and A. Abusorrah, “A survey of transfer learning for machinery diagnostics and prognostics,” Artif. Intell. Rev., pp. 1–52, Aug. 2022. [Online]. Available: https://doi.org/10.1007/s10462-022-10230-4.
    [41]
    M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, “Transfer feature learning with joint distribution adaptation,” in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 2200–2207.
    [42]
    M. Long, J. Wang, G. Ding, S. J. Pan, and P. S. Yu, “Adaptation regularization: A general framework for transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 5, pp. 1076–1089, May 2014. doi: 10.1109/TKDE.2013.111
    [43]
    J. Geng, X. Deng, X. Ma, and W. Jiang, “Transfer learning for SAR image classification via deep joint distribution adaptation networks,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 8, pp. 5377–5392, Aug. 2020. doi: 10.1109/TGRS.2020.2964679
    [44]
    Z. Liang, H. Dong, C. Liu, W. Liang, and Z. Zhu, “Evolutionary multitasking for multiobjective optimization with subspace alignment and adaptive differential evolution,” IEEE Trans. Cybern., vol. 52, no. 4, pp. 2097–2109, Apr. 2022.
    [45]
    K. Li, K. Pang, Y.-Z. Song, T. M. Hospedales, T. Xiang, and H. Zhang, “Synergistic instance-level subspace alignment for fine-grained sketch-based image retrieval,” IEEE Trans. Image Process., vol. 26, no. 12, pp. 5908–5921, Dec. 2017.
    [46]
    X. Chen, F.-K. Gong, G. Li, and H. Ding, “Interference subspace alignment in multiple-multicast networks,” IEEE Trans. Veh. Technol., vol. 68, no. 9, pp. 8853–8865, Sep. 2019.
    [47]
    R. Gui, X. Xu, R. Yang, L. Wang, and F. Pu, “Statistical scattering component-based subspace alignment for unsupervised cross-domain PolSAR image classification,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 7, pp. 5449–5463, Jul. 2021. doi: 10.1109/TGRS.2020.3028906
    [48]
    S. Gupta, S. Singh, R. Su, S. Gao, and J. C. Bansal, “Multiple elite individual guided piecewise search-based differential evolution,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 1, pp. 135–158, Jan. 2023. doi: 10.1109/JAS.2023.123018
    [49]
    Y. Yu, S. Gao, M. Zhou, Y. Wang, Z. Lei, T. Zhang, and J. Wang, “Scale-free network-based differential evolution to solve function optimization and parameter estimation of photovoltaic models,” Swarm and Evolutionary Computation, vol. 74, pp. 1–19, Oct. 2022.
    [50]
    Y. Yu, S. Gao, Y. Wang, and Y. Todo, “Global optimum-based search differential evolution,” IEEE/CAA J. Autom. Sinica, vol. 6, no. 2, pp. 379–394, Mar. 2019. doi: 10.1109/JAS.2019.1911378
    [51]
    Y. Yuan, Y.-S. Ong, L. Feng, A. K. Qin, A. Gupta, B. Da, Q. Zhang, K. C. Tan, Y. Jin, and H. Ishibuchi, “Evolutionary multitasking for multiobjective continuous optimization: Benchmark problems, performance metrics and baseline results,” Jun. 2017. [Online]. Available: arXiv: 1706.02766.
    [52]
    H. Li and Q. Zhang, “Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II,” IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 284–302, Apr. 2009. doi: 10.1109/TEVC.2008.925798
    [53]
    “Supplementary File” [Online], Available: https://pan.baidu.com/s/1eT0aG4xmJq6CQsrc0OduPg with password KCEM.
    [54]
    J. Derrac et al., “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm Evol. Comput., vol. 1, no. 1, pp. 3–18, Mar. 2011. doi: 10.1016/j.swevo.2011.02.002
    [55]
    K. Liu, Z. Wei, C. Zhang, Y. Shang, R. Teodorescu, and Q.-L. Han, “Towards long lifetime battery: AI-based manufacturing and management,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 7, pp. 1139-1165, Jul. 2022.
    [56]
    X. Wang, L. Liu, L. Duan, and Q. Liao, “Multi-objective optimization for an industrial grinding and classification process based on PBM and RSM,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 11, pp. 2124–2135, Nov. 2023. doi: 10.1109/JAS.2023.123333
    [57]
    F. Ming, W. Gong, L. Wang, and Y. Jin, “Constrained multi-objective optimization with deep reinforcement learning assisted operator selection,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 4, pp. 919–931, Apr. 2024. doi: 10.1109/JAS.2023.123687
    [58]
    Z.-J. Wang, Z.-H. Zhan, S. Kwong, et al., “Adaptive granularity learning distributed particle swarm optimization for large-scale optimization,” IEEE Trans. Cybern. , vol. 51, no. 3, pp. 1175–1188, Mar. 2021.
