A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 9, Issue 1, Jan. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: W. Jang, J. Hyun, J. An, M. Cho, and E. Kim, “A lane-level road marking map using a monocular camera,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 1, pp. 187–204, Jan. 2022. doi: 10.1109/JAS.2021.1004293

A Lane-Level Road Marking Map Using a Monocular Camera

doi: 10.1109/JAS.2021.1004293
Funds: This work was supported by the Industry Core Technology Development Project (20005062), Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space, funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea)
Abstract: An essential requirement for the precise localization of a self-driving car is a lane-level map that includes road markings (RMs). Such a map can be built by running a mobile mapping system (MMS) equipped with a high-end 3D LiDAR and other high-cost sensors; this approach, however, is expensive and inefficient, since a single high-end MMS must visit every location to be mapped. In this paper, a lane-level RM mapping system using a monocular camera is developed as an alternative to an expensive high-end MMS. The resulting RM map includes both road lanes (RLs) and symbolic road markings (SRMs). First, to build a lane-level RM map, the RMs are segmented at the pixel level by a deep learning network named RMNet, and the segmented RMs are accumulated into a lane-level RM map. Second, the map is refined through loop-closure detection and graph optimization. To train RMNet and build the map, a new large-scale dataset for lane-level RM mapping, named SeRM, is developed; it includes a total of 25157 pixel-wise annotated images and 21000 position-labeled images. Finally, the proposed lane-level map building method is applied to the SeRM dataset, and its validity is demonstrated through experiments.
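As a rough illustration of the accumulation step described above (a minimal sketch under assumed conventions, not the authors' implementation; the homography H_img_from_ground and the planar (x, y, theta) pose convention are assumptions), segmented road-marking pixels can be back-projected onto the ground plane via an inverse perspective mapping and stamped into the map frame with the vehicle pose:

```python
import numpy as np

def pixels_to_ground(px, H_img_from_ground):
    """Back-project segmented road-marking pixels onto the ground plane (z = 0).

    px:                (N, 2) array of (u, v) pixel coordinates
    H_img_from_ground: 3x3 homography mapping ground-plane points (x, y, 1)
                       to image points (u, v, 1); built offline from the
                       camera intrinsics and its mounting pose (assumed).
    Returns (N, 2) ground-plane coordinates in the vehicle frame.
    """
    H_ground_from_img = np.linalg.inv(H_img_from_ground)
    uv1 = np.hstack([px, np.ones((len(px), 1))])   # homogeneous pixel coords
    xyw = uv1 @ H_ground_from_img.T                # up-to-scale ground coords
    return xyw[:, :2] / xyw[:, 2:3]                # perspective divide

def vehicle_to_map(pts, pose):
    """Transform vehicle-frame ground points into the map frame.

    pose: (x, y, theta) vehicle pose, e.g., from wheel odometry; in the
    paper's pipeline such poses are later refined by loop-closure
    detection and graph optimization.
    """
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return pts @ R.T + np.array([x, y])
```

Repeating this for every frame yields the raw RM point map that the loop-closure and graph-optimization stage then refines.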

     



Figures (20) / Tables (9)

    Article Metrics

Article views (2096) / PDF downloads (225)

    Highlights

    • A lane-level RM map is built using only a monocular camera and a wheel encoder
    • RMNet is developed and trained on the SeRM dataset for road marking segmentation
    • A class-weighted loss and a class-weighted focal loss are proposed to handle the class imbalance problem (see the sketch after this list)
    • The semantic road mark mapping (SeRM) dataset is developed for effective RM segmentation and mapping
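For the class-imbalance highlight above, the following is a minimal PyTorch sketch of a class-weighted focal loss in the generic sense (the paper's exact weighting scheme may differ; the function name, shapes, and weight values are illustrative):

```python
import torch
import torch.nn.functional as F

def class_weighted_focal_loss(logits, target, class_weights, gamma=2.0):
    """Per-pixel cross-entropy with a per-class weight and focal modulation.

    logits:        (N, C, H, W) raw segmentation scores
    target:        (N, H, W) integer class labels
    class_weights: (C,) tensor, larger values for rare RM classes
    gamma:         focusing parameter; gamma = 0 reduces to weighted CE
    """
    log_p = F.log_softmax(logits, dim=1)                      # (N, C, H, W)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # (N, H, W)
    pt = log_pt.exp()                           # predicted prob. of true class
    weight = class_weights[target]              # per-pixel class weight
    return (-weight * (1.0 - pt) ** gamma * log_pt).mean()

# Illustrative usage with made-up shapes: 12 RM classes, one rare class up-weighted.
logits = torch.randn(2, 12, 64, 64)
target = torch.randint(0, 12, (2, 64, 64))
weights = torch.ones(12)
weights[3] = 5.0
loss = class_weighted_focal_loss(logits, target, weights)
```

The focal term (1 - pt)^gamma down-weights pixels the network already classifies confidently, so training focuses on hard, typically rare, road-marking classes.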

