IEEE/CAA Journal of Automatica Sinica
Citation: H. Zhang, L. Q. Jin, and C. Ye, "An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid," IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1389–1400, Aug. 2021. doi: 10.1109/JAS.2021.1004084
[1] R. R. A. Bourne, S. R. Flaxman, T. Braithwaite, M. V. Cicinelli, et al., “Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: A systematic review and meta-analysis,” Lancet Global Health, vol. 5, no. 9, pp. 888–897, 2017. doi: 10.1016/S2214-109X(17)30293-0
[2] J. M. Saez, F. Escolano, and A. Penalver, “First steps towards stereo-based 6-DOF SLAM for the visually impaired,” in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops, 2005.
[3] V. Pradeep, G. Medioni, and J. Weiland, “Robot vision for the visually impaired,” in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops, 2010, pp. 15–22.
[4] Y. H. Lee and G. Medioni, “RGB-D camera based navigation for the visually impaired,” in Proc. RSS Workshop on RGB-D: Advanced Reasoning With Depth Cameras, 2011, pp. 1–6.
[5] C. Ye, S. Hong, X. Qian, and W. Wu, “Co-robotic cane: A new robotic navigation aid for the visually impaired,” IEEE Systems, Man, and Cybernetics Magazine, vol. 2, no. 2, pp. 33–42, 2016. doi: 10.1109/MSMC.2015.2501167
[6] H. Zhang and C. Ye, “An indoor navigation aid for the visually impaired,” in Proc. IEEE Int. Conf. Robotics and Biomimetics, 2016, pp. 467–472.
[7] B. Li, J. P. Munoz, X. Rong, Q. Chen, et al., “Vision-based mobile indoor assistive navigation aid for blind people,” IEEE Trans. Mobile Computing, vol. 18, no. 3, pp. 702–714, 2018.
[8] H. Zhang, L. Jin, and C. Ye, “A depth-enhanced visual inertial odometry for a robotic navigation aid for blind people,” in Proc. Visual-Inertial Navigation: Challenges and Applications Workshop at 2019 IEEE/RSJ Int. Conf. Intelligent Robots and Systems.
[9] C. Ye, S. Hong, and A. Tamjidi, “6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features,” IEEE Trans. Automation Science and Engineering, vol. 12, no. 4, pp. 1169–1180, 2015. doi: 10.1109/TASE.2015.2469726
[10] S. Treuillet, E. Royer, T. Chateau, M. Dhome, et al., “Body mounted vision system for visually impaired outdoor and indoor wayfinding assistance,” in Proc. Conf. Assistive Technologies for People with Vision and Hearing Impairments, 2007.
[11] K. Wang, W. Wang, and Y. Zhuang, “A map approach for vision-based self-localization of mobile robot,” Acta Automatica Sinica, vol. 34, no. 2, pp. 159–166, 2008.
[12] D. Ahmetovic, C. Gleason, C. Ruan, K. Kitani, et al., “NavCog: A navigational cognitive assistant for the blind,” in Proc. 18th Int. Conf. Human-Computer Interaction with Mobile Devices and Services, 2016.
[13] A. Ganz, J. M. Schafer, S. Gandhi, E. Puleo, et al., “PERCEPT indoor navigation system for the blind and visually impaired: Architecture and experimentation,” Int. J. Telemedicine and Applications, 2012. doi: 10.1155/2012/894869
[14] A. Ganz, J. M. Schafer, Y. Tao, C. Wilson, et al., “PERCEPT-II: Smartphone based indoor navigation system for the blind,” in Proc. 36th Annu. Int. Conf. IEEE Engineering in Medicine and Biology Society, 2014, pp. 3662–3665.
[15] H. Zhang, L. Jin, H. Zhang, and C. Ye, “A comparative analysis of visual-inertial SLAM for assisted wayfinding of the visually impaired,” in Proc. IEEE Winter Conf. Applications of Computer Vision, 2019, pp. 210–217.
[16] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, et al., “Keyframe-based visual-inertial SLAM using nonlinear optimization,” Int. J. Robotics Research, vol. 34, no. 3, pp. 314–334, 2015. doi: 10.1177/0278364914554813
[17] T. Qin, P. Li, and S. Shen, “VINS-Mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Trans. Robotics, vol. 34, no. 4, pp. 1004–1020, 2018. doi: 10.1109/TRO.2018.2853729
[18] R. Mur-Artal and J. D. Tardós, “Visual-inertial monocular SLAM with map reuse,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 796–803, 2017.
[19] S. Weiss, M. W. Achtelik, S. Lynen, M. Chli, et al., “Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments,” in Proc. IEEE Int. Conf. Robotics and Automation, 2012, pp. 957–964.
[20] V. Indelman, S. Williams, M. Kaess, and F. Dellaert, “Information fusion in navigation systems via factor graph based incremental smoothing,” Robotics and Autonomous Systems, vol. 61, no. 8, pp. 721–738, 2013. doi: 10.1016/j.robot.2013.05.001
[21] W. Zheng, F. Zhou, and Z. Wang, “Robust and accurate monocular visual navigation combining IMU for a quadrotor,” IEEE/CAA J. Autom. Sinica, vol. 2, no. 1, pp. 33–44, 2015.
[22] A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proc. IEEE Int. Conf. Robotics and Automation, 2007.
[23] C. Campos, R. Elvira, J. J. G. Rodriguez, J. M. Montiel, et al., “ORB-SLAM3: An accurate open-source library for visual, visual-inertial and multi-map SLAM,” arXiv preprint arXiv:2007.11898, 2020.
[24] C. Campos, J. M. Montiel, and J. D. Tardós, “Inertial-only optimization for visual-inertial initialization,” arXiv preprint arXiv:2003.05766, 2020.
[25] T. Qin and S. Shen, “Robust initialization of monocular visual-inertial estimation on aerial robots,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2017.
[26] J. Delmerico and D. Scaramuzza, “A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots,” in Proc. IEEE Int. Conf. Robotics and Automation, 2018.
[27] K. Sun, K. Mohta, B. Pfrommer, M. Watterson, et al., “Robust stereo visual inertial odometry for fast autonomous flight,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 965–972, 2018.
[28] N. Brunetto, S. Salti, N. Fioraio, T. Cavallari, et al., “Fusion of inertial and visual measurements for RGB-D SLAM on mobile devices,” in Proc. IEEE Int. Conf. Computer Vision Workshops, 2015, pp. 1–9.
[29] Y. Ling, H. Liu, X. Zhu, J. Jiang, et al., “RGB-D inertial odometry for indoor robot via keyframe-based nonlinear optimization,” in Proc. IEEE Int. Conf. Mechatronics and Automation, 2018, pp. 973–979.
[30] Z. Shan, R. Li, and S. Schwertfeger, “RGBD-inertial trajectory estimation and mapping for ground robots,” Sensors, vol. 19, no. 10, Article No. 2251, 2019. doi: 10.3390/s19102251
[31] J. Shi and C. Tomasi, “Good features to track,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994, pp. 593–600.
[32] V. Lepetit, F. Moreno-Noguer, and P. Fua, “EPnP: An accurate O(n) solution to the PnP problem,” Int. J. Computer Vision, vol. 81, no. 2, Article No. 155, 2009. doi: 10.1007/s11263-008-0152-6
[33] F. Boniardi, T. Caselitz, R. Kummerle, and W. Burgard, “Robust LiDAR-based localization in architectural floor plans,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2017, pp. 3318–3324.
[34] F. Boniardi, T. Caselitz, R. Kummerle, and W. Burgard, “A pose graph-based localization system for long-term navigation in CAD floor plans,” Robotics and Autonomous Systems, vol. 112, pp. 84–97, 2019. doi: 10.1016/j.robot.2018.11.003
[35] Y. Watanabe, K. R. Amaro, B. Ilhan, T. Kinoshita, et al., “Robust localization with architectural floor plans and depth camera,” in Proc. IEEE/SICE Int. Symp. System Integration, 2020.
[36] A. Segal, D. Hähnel, and S. Thrun, “Generalized-ICP,” in Proc. Robotics: Science and Systems, 2009.
[37] S. Seifzadeh, B. Khaleghi, and F. Karray, “Distributed soft-data-constrained multi-model particle filter,” IEEE Trans. Cybernetics, vol. 45, no. 3, pp. 384–394, 2014.
[38] W. Winterhalter, F. Fleckenstein, B. Steder, L. Spinello, et al., “Accurate indoor localization for RGB-D smartphones and tablets given 2D floor plans,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2015, pp. 3138–3143.
[39] UP Bridge the Gap. [Online]. Available: http://www.up-board.org/up
[40] H. Zhang and C. Ye, “Human-robot interaction for assisted wayfinding of a robotic navigation aid for the blind,” in Proc. IEEE Int. Conf. Human System Interaction, 2019, pp. 137–142.
[41] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, New York, 2004.
[42] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. Imaging Understanding Workshop, 1981, pp. 121–130.
[43] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. doi: 10.1109/34.888718
[44] H. Zhang and C. Ye, “Plane-aided visual-inertial odometry for 6-DOF pose estimation of a robotic navigation aid,” IEEE Access, vol. 8, pp. 90042–90051, 2020. doi: 10.1109/ACCESS.2020.2994299
[45] H. Zhang and C. Ye, “An indoor wayfinding system based on geometric features aided graph SLAM for the visually impaired,” IEEE Trans. Neural Systems and Rehabilitation Engineering, vol. 25, no. 9, pp. 1592–1604, 2017. doi: 10.1109/TNSRE.2017.2682265
[46] C. Ye, “T-transformation: A new traversability analysis method for terrain navigation,” in Proc. SPIE Defense and Security Symp., 2004.
[47] C. Ye, “A method for mobile robot obstacle negotiation,” Int. J. Intelligent Control and Systems, vol. 10, no. 3, pp. 188–200, 2005.
[48] O. Wulf, K. O. Arras, H. I. Christensen, and B. Wagner, “2D mapping of cluttered indoor environments by means of 3D perception,” in Proc. IEEE Int. Conf. Robotics and Automation, 2004.
[49] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with Rao-Blackwellized particle filters,” IEEE Trans. Robotics, vol. 23, no. 1, pp. 34–46, 2007. doi: 10.1109/TRO.2006.889486