A joint journal of IEEE and the CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 6 Issue 3
May 2019

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Article Contents
Citation: Zhe Chen, Jing Zhang and Dacheng Tao, "Progressive LiDAR Adaptation for Road Detection," IEEE/CAA J. Autom. Sinica, vol. 6, no. 3, pp. 693-702, May 2019. doi: 10.1109/JAS.2019.1911459

Progressive LiDAR Adaptation for Road Detection

doi: 10.1109/JAS.2019.1911459
Funds:

Australian Research Council Projects FL-170100117, DP-180103424, and IH-180100002

National Natural Science Foundation of China (NSFC) 61806062
Abstract: Despite rapid developments in visual image-based road detection, robustly identifying road areas in visual images remains challenging due to issues such as illumination changes and blurry images. LiDAR sensor data can be incorporated to improve visual image-based road detection, because LiDAR data is less susceptible to visual noise. However, the main difficulty in introducing LiDAR information into visual image-based road detection is that LiDAR data and its extracted features do not share the same space as the visual data and visual features. Such gaps between the spaces may limit the benefits of LiDAR information for road detection. To overcome this issue, we introduce a novel progressive LiDAR adaptation-aided road detection (PLARD) approach that adapts LiDAR information to visual image-based road detection and improves detection performance. In PLARD, progressive LiDAR adaptation consists of two subsequent modules: 1) data space adaptation, which transforms the LiDAR data into the visual data space to align with the perspective view by applying an altitude difference-based transformation; and 2) feature space adaptation, which adapts LiDAR features to visual features through a cascaded fusion structure. Comprehensive empirical studies on the well-known KITTI road detection benchmark demonstrate that PLARD takes advantage of both the visual and LiDAR information, achieving much more robust road detection even in challenging urban scenes. In particular, PLARD outperforms other state-of-the-art road detection models and currently ranks first on the publicly accessible benchmark leaderboard.
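
The data space adaptation step described above projects LiDAR measurements into the camera's perspective view and applies an altitude difference-based transformation so that the LiDAR signal becomes an image-like cue in which flat road surfaces produce low responses. The sketch below is a minimal, hypothetical illustration of such a transformation, assuming the LiDAR altitudes have already been projected onto the image plane; the function name, neighborhood window, and normalization are assumptions made for illustration and do not reproduce the authors' exact formulation.

```python
# Hypothetical sketch of an altitude difference-based transformation (not the paper's
# exact formulation). Assumes LiDAR altitudes have already been projected into the
# camera's perspective view as an HxW altitude map with a validity mask.
import numpy as np

def altitude_difference_image(altitude_map, valid_mask, neighborhood=3, eps=1e-6):
    """Per-pixel altitude-difference response in the perspective view.

    altitude_map : HxW array of projected LiDAR altitudes.
    valid_mask   : HxW boolean array, True where a LiDAR return exists.
    neighborhood : half-size (in pixels) of the comparison window.
    """
    h, w = altitude_map.shape
    response = np.zeros((h, w), dtype=np.float64)
    for dy in range(-neighborhood, neighborhood + 1):
        for dx in range(-neighborhood, neighborhood + 1):
            if dx == 0 and dy == 0:
                continue
            # Compare each pixel with a shifted copy of the map (border wrap-around
            # from np.roll is ignored in this sketch).
            shifted = np.roll(np.roll(altitude_map, dy, axis=0), dx, axis=1)
            shifted_mask = np.roll(np.roll(valid_mask, dy, axis=0), dx, axis=1)
            both_valid = valid_mask & shifted_mask
            # Normalize the altitude gap by the pixel distance so near and far
            # neighbors contribute on a comparable scale; flat road pixels stay small.
            dist = np.hypot(dx, dy)
            diff = np.abs(altitude_map - shifted) / (dist + eps)
            response[both_valid] += diff[both_valid]
    # Rescale to [0, 1] so the result can be stacked with RGB channels for a CNN.
    if response.max() > 0:
        response /= response.max()
    return response
```

An image-like LiDAR representation of this kind would then be processed by a parallel network branch, and its features would be merged with the visual features through the cascaded fusion structure mentioned in the abstract.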

     




    Article Metrics

    Article views: 2382; PDF downloads: 113
