A journal of the IEEE and the CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 11, Issue 2
Feb. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: L. Yan, Q. Li, and K. Li, “Object helps U-Net based change detectors,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 2, pp. 548–550, Feb. 2024. doi: 10.1109/JAS.2023.124032

Object Helps U-Net Based Change Detectors

doi: 10.1109/JAS.2023.124032


Figures (4) / Tables (1)
