Citation: Y. L. Zhang, W. Liang, M. Z. Yuan, H. S. He, J. D. Tan, and Z. B. Pang, “Monocular visual-inertial and robotic-arm calibration in a unifying framework,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 1, pp. 146–159, Jan. 2022. doi: 10.1109/JAS.2021.1004290
[1] J. Xiao, D. Xiong, Q. Yu, K. Huang, H. Lu, and Z. Zeng, “A real-time sliding-window-based visual-inertial odometry for MAVs,” IEEE Trans. Ind. Inf., vol. 16, no. 6, pp. 4049–4058, 2020. doi: 10.1109/TII.2019.2959380
[2] S. Heo, J. Cha, and C. G. Park, “EKF-based visual inertial navigation using sliding window nonlinear optimization,” IEEE Trans. Intell. Transp. Syst., vol. 20, no. 7, pp. 2470–2479, 2019. doi: 10.1109/TITS.2018.2866637
[3] L. Yang, B. Li, W. Li, H. Brand, B. Jiang, and J. Xiao, “Concrete defects inspection and 3D mapping using CityFlyer quadrotor robot,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 4, pp. 991–1002, 2020. doi: 10.1109/JAS.2020.1003234
[4] D. Zou, Y. Wu, L. Pei, H. Ling, and W. Yu, “StructVIO: Visual-inertial odometry with structural regularity of man-made environments,” IEEE Trans. Robot., vol. 35, no. 4, pp. 999–1013, 2019. doi: 10.1109/TRO.2019.2915140
[5] T. Qin, P. Li, and S. Shen, “VINS-Mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Trans. Robot., vol. 34, no. 4, pp. 1004–1020, 2018. doi: 10.1109/TRO.2018.2853729
[6] K. Eckenhoff, Y. Yang, P. Geneva, and G. Huang, “Tightly-coupled visual-inertial localization and 3-D rigid-body target tracking,” IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1541–1548, 2019. doi: 10.1109/LRA.2019.2896472
[7] Y. Liu and Z. Meng, “Online temporal calibration based on modified projection model for visual-inertial odometry,” IEEE Trans. Instrum. Meas., vol. 69, no. 7, pp. 5197–5207, 2020. doi: 10.1109/TIM.2019.2951863
[8] G. Panahandeh, M. Jansson, and P. Handel, “Calibration of an IMU-camera cluster using planar mirror reflection and its observability analysis,” IEEE Trans. Instrum. Meas., vol. 64, no. 1, pp. 75–88, 2015. doi: 10.1109/TIM.2014.2329388
[9] J. H. White and R. W. Beard, “An iterative pose estimation algorithm based on epipolar geometry with application to multi-target tracking,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 4, pp. 942–953, 2020. doi: 10.1109/JAS.2020.1003222
[10] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, “On-manifold preintegration for real-time visual–inertial odometry,” IEEE Trans. Robot., vol. 33, no. 1, pp. 1–21, 2017. doi: 10.1109/TRO.2016.2597321
[11] C. Ye, H. Zhang, and L. Jin, “Camera intrinsic parameters estimation by visual inertial odometry for a mobile phone with application to assisted navigation,” IEEE/ASME Trans. Mechatron., vol. 25, no. 4, pp. 1803–1811, 2020.
[12] F. Santoso, M. A. Garratt, and S. G. Anavatti, “Visual-inertial navigation systems for aerial robotics: Sensor fusion and technology,” IEEE Trans. Autom. Sci. Eng., vol. 14, no. 1, pp. 260–275, 2017. doi: 10.1109/TASE.2016.2582752
[13] Y. Zhang, W. Liang, H. He, and J. Tan, “Wearable heading estimation for motion tracking in health care by adaptive fusion of visual-inertial measurements,” IEEE J. Biomed. Health Inform., vol. 22, no. 6, pp. 1732–1743, 2018. doi: 10.1109/JBHI.2018.2795006
[14] Y. Wang, S. James, E. K. Stathopoulou, C. Beltrán-González, Y. Konishi, and A. Del Bue, “Autonomous 3-D reconstruction, mapping, and exploration of indoor environments with a robotic arm,” IEEE Robot. Autom. Lett., vol. 4, no. 4, pp. 3340–3347, 2019. doi: 10.1109/LRA.2019.2926676
[15] M. Di Castro, C. V. Almagro, G. Lunghi, R. Marin, M. Ferre, and A. Masi, “Tracking-based depth estimation of metallic pieces for robotic guidance,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2018, pp. 5503–5508.
[16] H. Wang, D. Guo, H. Xu, W. Chen, T. Liu, and K. K. Leang, “Eye-in-hand tracking control of a free-floating space manipulator,” IEEE Trans. Aerosp. Electron. Syst., vol. 53, no. 4, pp. 1855–1865, 2017. doi: 10.1109/TAES.2017.2674218
[17] X. Zhang, Y. Fang, X. Zhang, J. Jiang, and X. Chen, “Dynamic image-based output feedback control for visual servoing of multirotors,” IEEE Trans. Ind. Inf., vol. 16, no. 12, pp. 7624–7636, 2020.
[18] I. Sa, M. Kamel, M. Burri, M. Bloesch, R. Khanna, M. Popovic, J. Nieto, and R. Siegwart, “Build your own visual-inertial drone: A cost-effective and open-source autonomous drone,” IEEE Robot. Autom. Mag., vol. 25, no. 1, pp. 89–103, 2018. doi: 10.1109/MRA.2017.2771326
[19] J. Kelly and G. S. Sukhatme, “Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration,” Int. J. Robot. Res., vol. 30, no. 1, pp. 56–79, 2011. doi: 10.1177/0278364910382802
[20] H. He, Y. Li, Y. Guan, and J. Tan, “Wearable ego-motion tracking for blind navigation in indoor environments,” IEEE Trans. Autom. Sci. Eng., vol. 12, no. 4, pp. 1181–1190, 2015. doi: 10.1109/TASE.2015.2471175
[21] Y. Tian, W. R. Hamel, and J. Tan, “Accurate human navigation using wearable monocular visual and inertial sensors,” IEEE Trans. Instrum. Meas., vol. 63, no. 1, pp. 203–213, 2014. doi: 10.1109/TIM.2013.2277514
[22] X. Li, X. Peng, S. Li, and X. Zhang, “Semi-direct monocular visual odometry based on visual-inertial fusion,” Robot, vol. 42, no. 5, pp. 595–605, 2020.
[23] W. Huang, H. Liu, and W. Wan, “An online initialization and self-calibration method for stereo visual-inertial odometry,” IEEE Trans. Robot., vol. 36, no. 4, pp. 1153–1170, 2020.
[24] J. Rehder and R. Siegwart, “Camera/IMU calibration revisited,” IEEE Sensors J., vol. 17, no. 11, pp. 3257–3268, 2017. doi: 10.1109/JSEN.2017.2674307
[25] G. Huang, “Visual-inertial navigation: A concise review,” in Proc. IEEE Int. Conf. Robot. Autom., 2019, pp. 9572–9582.
[26] Z. Yang and S. Shen, “Monocular visual-inertial state estimation with online initialization and camera-IMU extrinsic calibration,” IEEE Trans. Autom. Sci. Eng., vol. 14, no. 1, pp. 39–51, 2017. doi: 10.1109/TASE.2016.2550621
[27] Y. Yang, P. Geneva, K. Eckenhoff, and G. Huang, “Degenerate motion analysis for aided INS with online spatial and temporal sensor calibration,” IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 2070–2077, 2019. doi: 10.1109/LRA.2019.2893803
[28] A. Martinelli, “Vision and IMU data fusion: Closed-form solutions for attitude, speed, absolute scale, and bias determination,” IEEE Trans. Robot., vol. 28, no. 1, pp. 44–60, 2012. doi: 10.1109/TRO.2011.2160468
[29] X. Zheng, Z. Moratto, M. Li, and A. I. Mourikis, “Photometric patch-based visual-inertial odometry,” in Proc. IEEE Int. Conf. Robot. Autom., 2017, pp. 3264–3271.
[30] T. Qin and S. Shen, “Online temporal calibration for monocular visual-inertial systems,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2018, pp. 3662–3669.
[31] J. Lobo and J. Dias, “Relative pose calibration between visual and inertial sensors,” Int. J. Robot. Res., vol. 26, no. 6, pp. 561–575, 2007. doi: 10.1177/0278364907079276
[32] H. He, Y. Li, and J. Tan, “Rotational coordinate transformation for visual-inertial sensor fusion,” in Proc. Int. Conf. Social Robotics, 2016, pp. 431–440.
[33] P. Furgale, J. Rehder, and R. Siegwart, “Unified temporal and spatial calibration for multi-sensor systems,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2013, pp. 1280–1286.
[34] X. Luo, H. Wu, H. Yuan, and M. Zhou, “Temporal pattern-aware QoS prediction via biased non-negative latent factorization of tensors,” IEEE Trans. Cybern., vol. 50, no. 5, pp. 1798–1809, 2020. doi: 10.1109/TCYB.2019.2903736
[35] X. Luo, M. Zhou, S. Li, L. Hu, and M. Shang, “Non-negativity constrained missing data estimation for high-dimensional and sparse matrices from industrial applications,” IEEE Trans. Cybern., vol. 50, no. 5, pp. 1844–1855, 2020. doi: 10.1109/TCYB.2019.2894283
[36] X. Luo, M. Zhou, S. Li, and M. Shang, “An inherently nonnegative latent factor model for high-dimensional and sparse matrices from industrial applications,” IEEE Trans. Ind. Inf., vol. 14, no. 5, pp. 2011–2022, 2018. doi: 10.1109/TII.2017.2766528
[37] C. Nissler, Z.-C. Marton, H. Kisner, U. Thomas, and R. Triebel, “A method for hand-eye and camera-to-camera calibration for limited fields of view,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2017, pp. 5868–5873.
[38] Z. Li, S. Li, and X. Luo, “An overview of calibration technology of industrial robots,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 1, pp. 23–36, 2021. doi: 10.1109/JAS.2020.1003381
[39] C. Mao, S. Li, Z. Chen, X. Zhang, and C. Li, “Robust kinematic calibration for improving collaboration accuracy of dual-arm manipulators with experimental validation,” Measurement, vol. 155, pp. 1–13, 2020.
[40] J. Wu, Y. Sun, M. Wang, and M. Liu, “Hand-eye calibration: 4-D Procrustes analysis approach,” IEEE Trans. Instrum. Meas., vol. 69, no. 6, pp. 2966–2981, 2020. doi: 10.1109/TIM.2019.2930710
[41] X. Liu, H. Madhusudanan, W. Chen, D. Li, J. Ge, C. Ru, and Y. Sun, “Fast eye-in-hand 3-D scanner-robot calibration for low stitching errors,” IEEE Trans. Ind. Electron., pp. 1–10, 2020. doi: 10.1109/TIE.2020.3009568
[42] K. Koide and E. Menegatti, “General hand-eye calibration based on reprojection error minimization,” IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1021–1028, 2019. doi: 10.1109/LRA.2019.2893612
[43] Z. Zhao, “Simultaneous robot-world and hand-eye calibration by the alternative linear programming,” Pattern Recognit. Lett., vol. 127, pp. 174–180, 2019. doi: 10.1016/j.patrec.2018.08.023
[44] J. H. Jung, S. Heo, and C. G. Park, “Observability analysis of IMU intrinsic parameters in stereo visual-inertial odometry,” IEEE Trans. Instrum. Meas., pp. 1–12, 2020. doi: 10.1109/TIM.2020.2985174
[45] P. Gao, K. Li, T. Song, and Z. Liu, “An accelerometers-size-effect self-calibration method for triaxis rotational inertial navigation system,” IEEE Trans. Ind. Electron., vol. 65, no. 2, pp. 1655–1664, 2018. doi: 10.1109/TIE.2017.2733491
[46] T. Schneider, M. Li, C. Cadena, J. Nieto, and R. Siegwart, “Observability-aware self-calibration of visual and inertial sensors for ego-motion estimation,” IEEE Sensors J., vol. 19, no. 10, pp. 3846–3860, 2019. doi: 10.1109/JSEN.2019.2893809
[47] F. Tian, W. Feng, Q. Zhang, X. Wang, J. Sun, V. Loia, and Z. Liu, “Active camera relocalization from a single reference image without hand-eye calibration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 12, pp. 2791–2806, 2019. doi: 10.1109/TPAMI.2018.2870646
[48] Y. Choi, N. Kim, S. Hwang, K. Park, J. S. Yoon, K. An, and I. S. Kweon, “KAIST multi-spectral day/night data set for autonomous and assisted driving,” IEEE Trans. Intell. Transp. Syst., vol. 19, no. 3, pp. 934–948, 2018. doi: 10.1109/TITS.2018.2791533
[49] Y. Zhang, S. Li, J. Zou, and A. H. Khan, “A passivity-based approach for kinematic control of manipulators with constraints,” IEEE Trans. Ind. Inf., vol. 16, no. 5, pp. 3029–3038, 2020. doi: 10.1109/TII.2019.2908442
[50] Y. Gan, J. Duan, and X. Dai, “A calibration method of robot kinematic parameters by drawstring displacement sensor,” Int. J. Adv. Robot. Syst., vol. 16, no. 5, Art. no. 1729881419883072, 2019.
[51] J. Wang and E. Olson, “AprilTag 2: Efficient and robust fiducial detection,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2016, pp. 4193–4198.
[52] W. Xu, D. Meng, H. Liu, X. Wang, and B. Liang, “Singularity-free trajectory planning of free-floating multiarm space robots for keeping the base inertially stabilized,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 49, no. 12, pp. 2464–2477, 2019. doi: 10.1109/TSMC.2017.2693232
[53] H. He, Y. Li, and J. Tan, “Relative motion estimation using visual–inertial optical flow,” Auton. Robot., vol. 42, no. 3, pp. 615–629, 2018. doi: 10.1007/s10514-017-9654-9