A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 7 Issue 4
Jul. 2020

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Liang Yang, Bing Li, Wei Li, Howard Brand, Biao Jiang and Jizhong Xiao, "Concrete Defects Inspection and 3D Mapping Using CityFlyer Quadrotor Robot," IEEE/CAA J. Autom. Sinica, vol. 7, no. 4, pp. 991-1002, July 2020. doi: 10.1109/JAS.2020.1003234

Concrete Defects Inspection and 3D Mapping Using CityFlyer Quadrotor Robot

doi: 10.1109/JAS.2020.1003234
Funds: This work was supported in part by the U.S. National Science Foundation (IIP-1915721) and the U.S. Department of Transportation, Office of the Assistant Secretary for Research and Technology (USDOT/OST-R) (69A3551747126), through the INSPIRE University Transportation Center (http://inspire-utc.mst.edu) at Missouri University of Science and Technology.
  • The concrete aging problem has gained more attention in recent years as more bridges and tunnels in the United States lack proper maintenance. Though the Federal Highway Administration requires these public concrete structures to be inspected regularly, on-site manual inspection by human operators is time-consuming and labor-intensive. Conventional approaches to concrete inspection, which rely on RGB image-based thresholding, cannot recover metric information or accurate location information for the assessed defects. To address this challenge, we propose a deep neural network (DNN) based concrete inspection system using a quadrotor flying robot (referred to as CityFlyer) equipped with an RGB-D camera. The inspection system introduces several novel modules. First, a visual-inertial fusion approach is introduced to perform camera and robot positioning and 3D metric reconstruction of the structure. The reconstructed map is used to retrieve the location and metric information of the defects. Second, we introduce a DNN model, namely AdaNet, to detect concrete spalling and cracking, which remains robust across varying distances between the camera and the concrete surface. To train the model, we craft a new dataset, i.e., the concrete structure spalling and cracking (CSSC) dataset, which is released publicly to the research community. Finally, we introduce a 3D semantic mapping method that uses the annotated frames to reconstruct the concrete structure for visualization. Comparative studies demonstrate that AdaNet achieves 8.41% higher detection accuracy than ResNets and VGGs. Moreover, we conducted five field tests, of which three are manual hand-held tests and two are drone-based field tests. These results indicate that our system is capable of performing metric field inspection and can serve as an effective tool for civil engineers.
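The patch-level classification step can be illustrated with a short PyTorch sketch, since the authors' released code (https://github.com/ccny-ros-pkg/pytorch_Concrete_Inspection) is PyTorch-based. The TinyDefectNet architecture, the three-way label set, and the parameter values below are illustrative assumptions, not the paper's AdaNet definition; the global average pooling is shown only as one simple way to keep a classifier usable across input resolutions.

    # Minimal sketch of patch-level defect classification (hypothetical
    # TinyDefectNet stand-in; AdaNet itself is defined in the paper and repo).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    CLASSES = ("background", "crack", "spalling")  # assumed label set

    class TinyDefectNet(nn.Module):
        def __init__(self, num_classes=len(CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x):
            x = self.features(x)
            # Global average pooling keeps the head valid at any input size.
            x = F.adaptive_avg_pool2d(x, 1).flatten(1)
            return self.head(x)

    model = TinyDefectNet().eval()
    patch = torch.rand(1, 3, 224, 224)  # one RGB patch from an inspection frame
    with torch.no_grad():
        probs = F.softmax(model(patch), dim=1)[0]
    print({c: float(p) for c, p in zip(CLASSES, probs)})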


  • 1 https://github.com/ccny-ros-pkg/pytorch_Concrete_Inspection
    † The first three authors contributed equally.
  • [1]
    N. Gucunski and H. Parvardeh, “Condition assessment of bridge deck using various nondestructive evaluation (NDE) technologies,” Center of Advanced Infrastructure and Transportation, Rutgers Univ., USA, Jun. 2015.
    [2]
    U.S. Department of Transportation Federal Highway Administration, “Specification for the national bridge inventory bridge elements,” U.S. Department of Transportation Federal Highway Administration, USA, 2014.
    [3]
    N. Y. D. of Transportation, “Bridge inspection manual,” Jan. 2016. [Online]. Available: https://www.dot.ny.gov/divisions/engineering/structures/manuals/bridge-inspection
    [4]
    B. William and E. Steve, “Tunnel operations, maintenance, inspection, and evaluation (TOMIE) manual,” Federal Highway Administration, Washington, DC, USA, Jul. 2015.
    [5]
    R. S. Lim, H. M. La, and W. H. Sheng, “A robotic crack inspection and mapping system for bridge deck maintenance,” IEEE Trans. Autom. Sci. Eng., vol. 11, no. 2, pp. 367–378, Jan. 2014. doi: 10.1109/TASE.2013.2294687
    [6]
    P. Prasanna, K. J. Dana, N. Gucunski, B. B. Basily, H. M. La, R. S. Lim, and H. Parvardeh, “Automated crack detection on concrete bridges,” IEEE Trans. Autom. Sci. Eng., vol. 13, no. 2, pp. 591–599, Oct. 2016. doi: 10.1109/TASE.2014.2354314
    [7]
    H. M. La, N. Gucunski, K. Dana, and S. H. Kee, “Development of an autonomous bridge deck inspection robotic system,” J. Field Rob., vol. 34, no. 8, pp. 1489–1504, Dec. 2017. doi: 10.1002/rob.21725
    [8]
    N. Hallermann and G. Morgenthal, “Visual inspection strategies for large bridges using unmanned aerial vehicles (UAV),” in Proc. 7th IABMAS, Int. Conf. Bridge Maintenance, Safety and Management, Washington, DC, USA, 2014, pp. 661–667.
    [9]
    Z. Ren, K. Qian, Z. X. Zhang, V. Pandit, A. Baird, and B. Schuller, “Deep scalogram representations for acoustic scene classification,” IEEE/CAA J. Autom. Sinica, vol. 5, no. 3, pp. 662–669, May 2018. doi: 10.1109/JAS.2018.7511066
    [10]
    D. Yu and J. Y. Li, “Recent progresses in deep learning based acoustic models,” IEEE/CAA J. Autom. Sinica, vol. 4, no. 3, pp. 396–409, Jul. 2017. doi: 10.1109/JAS.2017.7510508
    [11]
    Z. W. Wang, M. C. Zhou, G. G. Slabaugh, J. F. Zhai, and T. Fang, “Automatic detection of bridge deck condition from ground penetrating radar images,” IEEE Trans. Autom. Sci. Eng., vol. 8, no. 3, pp. 633–640, Dec. 2011. doi: 10.1109/TASE.2010.2092428
    [12]
    G. Li, S. H. He, Y. F. Ju, and K. Du, “Long-distance precision inspection method for bridge cracks with image processing,” Autom. Constr., vol. 41, pp. 83–95, May 2014. doi: 10.1016/j.autcon.2013.10.021
    [13]
    R. S. Adhikari, O. Moselhi, and A. Bagchi, “Image-based retrieval of concrete crack properties for bridge inspection,” Autom. Constr., vol. 39, pp. 180–194, Apr. 2014. doi: 10.1016/j.autcon.2013.06.011
    [14]
    M. R. Jahanshahi and S. F. Masri, “Adaptive vision-based crack detection using 3D scene reconstruction for condition assessment of structures,” Autom. Constr., vol. 22, pp. 567–576, Mar. 2012. doi: 10.1016/j.autcon.2011.11.018
    [15]
    S. K. Sinha and P. W. Fieguth, “Automated detection of cracks in buried concrete pipe images,” Autom. Constr., vol. 15, no. 1, pp. 58–72, Jan. 2006. doi: 10.1016/j.autcon.2005.02.006
    [16]
    T. H. Dinh, Q. P. Ha, and H. La, “Computer vision-based method for concrete crack detection,” in Proc. 14th Int. Conf. Control, Automation, Robotics and Vision, Phuket, Thailand, 2016, pp. 1–6.
    [17]
    L. L. Wu, S. Mokhtari, A. Nazef, B. Nam, and H. B. Yun, “Improvement of crack-detection accuracy using a novel crack defragmentation technique in image-based road assessment,” J. Comput. Civ. Eng., vol. 30, no. 1, pp. 04014118, Jan. 2016. doi: 10.1061/(ASCE)CP.1943-5487.0000451
    [18]
    L. Yang, B. Li, W. Li, Z. M. Liu, G. Y. Yang, and J. Z. Xiao, “A robotic system towards concrete structure spalling and crack database,” in Proc. IEEE Int. Conf. Robotics and Biomimetics, Macau, China, 2017, pp. 1276–1281.
    [19]
    L. Yang, B. Li, W. Li, Z. M. Liu, G. Y. Yang, and J. Z. Xiao, “Deep concrete inspection using unmanned aerial vehicle towards CSSC database,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Vancouver, Canada, 2017, pp. 24–27.
    [20]
    I. Dryanovski, R. G. Valenti, and J. Z. Xiao, “An open-source navigation system for micro aerial vehicles,” Auton. Rob., vol. 34, no. 3, pp. 177–188, Mar. 2013. doi: 10.1007/s10514-012-9318-8
    [21]
    R. G. Valenti, Y. D. Jian, K. Ni, and J. Z. Xiao, “An autonomous flyer photographer,” in Proc. IEEE Int. Conf. Cyber Technology in Automation, Control, and Intelligent Systems, Chengdu, China, 2016, pp. 273–278.
    [22]
    R. Mur-Artal and J. D. Tardòs, “ORB-SLAM2: An open-source slam system for monocular, stereo, and RGB-D cameras,” IEEE Trans. Rob., vol. 33, no. 5, pp. 1255–1262, Jun. 2017. doi: 10.1109/TRO.2017.2705103
    [23]
    H. Z. Fang, N. Tian, Y. B. Wang, M. C. Zhou, and M. A. Haile, “Nonlinear Bayesian estimation: From Kalman filtering to a broader horizon,” IEEE/CAA J. Autom. Sinica, vol. 5, no. 2, pp. 401–417, Feb. 2018. doi: 10.1109/JAS.2017.7510808
    [24]
    T. Y. Lin, P. Dollár, R. Girshick, K. M. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 936–944.
    [25]
    K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
    [26]
    K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv: 1409.1556, Sept. 2014.
    [27]
    A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. 25th Int. Conf. Neural Information Processing Systems, Lake Tahoe, USA, 2012, pp. 1097–1105.
    [28]
    H. Y. Xue, S. M. Zhang, and D. Cai, “Depth image inpainting: Improving low rank matrix completion with low gradient regularization,” IEEE Trans. Image Process., vol. 26, no. 9, pp. 4311–4320, Sept. 2017. doi: 10.1109/TIP.2017.2718183
    [29]
    Y. D. Zhang and T. Funkhouser, “Deep depth completion of a single RGB-D image,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 175–185.
    [30]
    O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. 18th Int. Conf. Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015, pp. 234–241.
    [31]
    A. Levin, D. Lischinski, and Y. Weiss, “Colorization using optimization,” in Proc. ACM SIGGRAPH, California, USA, 2004, pp. 689–694.
    [32]
    W. Q. Liu and S. E. Chen, “Reliability analysis of bridge evaluations based on 3D light detection and ranging data,” Struct. Control Health Monit., vol. 20, no. 12, pp. 1397–1409, Dec. 2013. doi: 10.1002/stc.1533
    [33]
    A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proc. IEEE Int. Conf. Robotics and Automation, Roma, Italy, 2007, pp. 3565–3572.
    [34]
    L. Armesto, J. Tornero, and M. Vincze, “Fast ego-motion estimation with multi-rate fusion of inertial and vision,” Int. J. Rob. Res., vol. 26, no. 6, pp. 577–589, Jun. 2007. doi: 10.1177/0278364907079283
    [35]
    R. ümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “G.2o: A general framework for graph optimization,” in Proc. IEEE Int. Conf. Robotics and Automation, Shanghai, China, 2011, pp. 3607–3613.
    [36]
    L. Yang, B. Li, W. Li, B. Jiang, and J. Z. Xiao, “Semantic metric 3D reconstruction for concrete inspection,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 2018.
    [37]
    B. Limketkai, D. Fox, and L. Liao, “CRF-filters: Discriminative particle filters for sequential state estimation,” in Proc. IEEE Int. Conf. Robotics and Automation, Roma, Italy, 2007, pp. 3142–3147.
    [38]
    A. Kundu, Y. Li, F. Dellaert, F. X. Li, and J. M. Rehg, “Joint semantic segmentation and 3D reconstruction from monocular video,” in Proc. 13th European Conf. Computer Vision, Zurich, Switzerland, 2014, pp. 703–718.
    [39]
    S. C. Gao, M. C. Zhou, Y. R. Wang, J. J. Cheng, H. Yachi, and J. H. Wang, “Dendritic neuron model with effective learning algorithms for classification, approximation, and prediction,” IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 2, pp. 601–614, Feb. 2019. doi: 10.1109/TNNLS.2018.2846646


Figures (11) / Tables (5)

    Article Metrics

    Article views (952), PDF downloads (57)

    Highlights

    • A high-quality labeled dataset for crack and spalling detection, the first publicly available dataset for visual inspection of concrete structures. It contains 522 labeled crack images, 298 labeled spalling images, and over 10,000 field-collected images of concrete structures.
    • A robotic inspection system with visual-inertial fusion that estimates pose from an RGB-D camera and an IMU at 100 Hz, enabling online navigation and 3D mapping (a toy fusion sketch follows this list).
    • A depth in-painting model that fills depth holes end-to-end in real time (a classical baseline is sketched after this list for comparison).
    • A multi-resolution model that adapts to changes in image resolution and enables accurate defect detection in the field.
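
    The visual-inertial fusion above is a full estimator in the paper; as a rough illustration of how 100 Hz IMU dead-reckoning can be corrected by slower camera poses, here is a toy complementary filter in Python/NumPy. The class name, blend gain, update rates, and the assumption that gravity has already been removed from the accelerometer signal are all invented for this example.

        # Toy complementary filter: 100 Hz IMU prediction, ~30 Hz camera correction.
        # Illustrative only; not the estimator used in the paper.
        import numpy as np

        class PoseFuser:
            def __init__(self, blend=0.2):
                self.p = np.zeros(3)   # position estimate (m)
                self.v = np.zeros(3)   # velocity estimate (m/s)
                self.blend = blend     # trust placed in each camera fix

            def imu_update(self, accel_world, dt=0.01):
                """Dead-reckon at the IMU rate (100 Hz -> dt = 0.01 s)."""
                self.v += accel_world * dt   # gravity assumed already removed
                self.p += self.v * dt

            def camera_update(self, p_cam):
                """Blend in a slower, drift-free camera position fix."""
                self.p = (1.0 - self.blend) * self.p + self.blend * p_cam

        fuser = PoseFuser()
        for step in range(100):                    # one second of IMU data
            fuser.imu_update(np.array([0.1, 0.0, 0.0]))
            if step % 3 == 2:                      # camera pose every ~3 IMU steps
                fuser.camera_update(np.array([0.05, 0.0, 0.0]))
        print(fuser.p)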
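
    The depth in-painting model in the highlights is a learned, end-to-end network and is not reproduced here. For comparison, the snippet below fills simulated depth holes with OpenCV's classical Telea in-painting, a common non-learned baseline; the synthetic depth map, hole location, and scaling scheme are assumptions made for the example.

        # Classical stand-in for depth hole in-painting (Telea method);
        # the paper's end-to-end network is the actual contribution.
        import cv2
        import numpy as np

        depth = np.random.uniform(0.5, 4.0, (120, 160)).astype(np.float32)
        depth[40:60, 50:90] = 0.0                 # simulate a sensor hole

        mask = (depth == 0).astype(np.uint8)      # pixels to be filled
        # cv2.inpaint expects 8-bit input, so scale depth into [0, 255].
        scale = 255.0 / depth.max()
        depth8 = (depth * scale).astype(np.uint8)
        filled8 = cv2.inpaint(depth8, mask, 5, cv2.INPAINT_TELEA)
        filled = filled8.astype(np.float32) / scale   # back to metres
        print(filled[50, 60])                     # hole now holds a plausible value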
