Citation: Y. Zhang, G. Tian, C.-H. Zhang, C. Hua, W. Ding, and C. Ahn, “Environment modeling for service robots from a task execution perspective,” IEEE/CAA J. Autom. Sinica, 2025. doi: 10.1109/JAS.2025.125168
[1] Y. Tong, H. Liu, and Z. Zhang, “Advancements in humanoid robots: A comprehensive review and future prospects,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 2, pp. 301–328, 2024. doi: 10.1109/JAS.2023.124140
[2] L. Kunze, N. Hawes, T. Duckett, M. Hanheide, and T. Krajník, “Artificial intelligence for long-term robot autonomy: A survey,” IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 4023–4030, Oct. 2018. doi: 10.1109/LRA.2018.2860628
[3] T. Shen, J. Sun, S. Kong, Y. Wang, J. Li, X. Li, and F.-Y. Wang, “The journey/dao/tao of embodied intelligence: From large models to foundation intelligence and parallel intelligence,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 6, pp. 1313–1316, 2024. doi: 10.1109/JAS.2024.124407
[4] D. Sirintuna, A. Giammarino, and A. Ajoudani, “An object deformation-agnostic framework for human–robot collaborative transportation,” IEEE Trans. Autom. Sci. Eng., vol. 21, no. 2, pp. 1986–1999, 2024. doi: 10.1109/TASE.2023.3259162
[5] Y. Zhang, G. Tian, and H. Chen, “Exploring the cognitive process for service task in smart home: A robot service mechanism,” Future Gener. Comput. Syst., vol. 102, pp. 588–602, Jan. 2020. doi: 10.1016/j.future.2019.09.020
[6] Z. Gao, J. Qin, S. Wang, and Y. Wang, “Boundary gap based reactive navigation in unknown environments,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 2, pp. 468–477, 2021. doi: 10.1109/JAS.2021.1003841
[7] Q. Liu, X. Cui, Z. Liu, and H. Wang, “Cognitive navigation for intelligent mobile robots: A learning-based approach with topological memory configuration,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 9, pp. 1933–1943, 2024. doi: 10.1109/JAS.2024.124332
[8] S. Liu, G. Tian, Y. Zhang, M. Zhang, and S. Liu, “Service planning oriented efficient object search: A knowledge-based framework for home service robot,” Expert Syst. Appl., vol. 187, pp. 1–22, 2022.
[9] C. Wang, L. Meng, S. She, I. M. Mitchell, T. Li, F. Tung, W. Wan, M. Q.-H. Meng, and C. W. de Silva, “Autonomous mobile robot navigation in uneven and unstructured indoor environments,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2017, pp. 109–116.
[10] N. Sünderhauf, T. T. Pham, Y. Latif, M. Milford, and I. Reid, “Meaningful maps with object-oriented semantic mapping,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2017, pp. 5079–5085.
[11] Y. Zhang, G. Tian, X. Shao, S. Liu, M. Zhang, and P. Duan, “Building metric-topological map to efficient object search for mobile robot,” IEEE Trans. Ind. Electron., vol. 69, no. 7, pp. 7076–7087, 2022. doi: 10.1109/TIE.2021.3095812
[12] M. Ersen, E. Oztop, and S. Sariel, “Cognition-enabled robot manipulation in human environments: Requirements, recent work, and open problems,” IEEE Robot. Autom. Mag., vol. 24, no. 3, pp. 108–122, Sep. 2017. doi: 10.1109/MRA.2016.2616538
[13] Y. Zhang, G. Tian, and X. Shao, “Safe and efficient robot manipulation: Task-oriented environment modeling and object pose estimation,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1–12, 2021.
[14] H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: Part I,” IEEE Robot. Autom. Mag., vol. 13, no. 2, pp. 99–110, 2006. doi: 10.1109/MRA.2006.1638022
[15] T. Bailey and H. Durrant-Whyte, “Simultaneous localization and mapping: Part II,” IEEE Robot. Autom. Mag., vol. 13, no. 3, pp. 108–117, 2006. doi: 10.1109/MRA.2006.1678144
[16] J. M. Santos, D. Portugal, and R. P. Rocha, “An evaluation of 2D SLAM techniques available in Robot Operating System,” in Proc. IEEE Int. Symp. Safety Secur. Rescue Robot., 2013, pp. 1–6.
[17] S. Huang and G. Dissanayake, “A critique of current developments in simultaneous localization and mapping,” Int. J. Adv. Robot. Syst., vol. 13, no. 5, pp. 1–13, 2016.
[18] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,” IEEE Trans. Robot., vol. 32, no. 6, pp. 1309–1332, 2016. doi: 10.1109/TRO.2016.2624754
[19] I. A. Kazerouni, L. Fitzgerald, G. Dooly, and D. Toal, “A survey of state-of-the-art on visual SLAM,” Expert Syst. Appl., vol. 205, pp. 1–17, 2022.
[20] G. Younes, D. Asmar, E. Shammas, and J. Zelek, “Keyframe-based monocular SLAM: Design, survey, and future directions,” Robot. Auton. Syst., vol. 98, pp. 67–88, 2017. doi: 10.1016/j.robot.2017.09.010
[21] M. R. U. Saputra, A. Markham, and N. Trigoni, “Visual SLAM and structure from motion in dynamic environments: A survey,” ACM Comput. Surv., vol. 51, no. 2, pp. 1–36, 2018.
[22] H. Pu, J. Luo, G. Wang, T. Huang, and H. Liu, “Visual SLAM integration with semantic segmentation and deep learning: A review,” IEEE Sensors J., vol. 23, no. 19, pp. 22119–22138, 2023. doi: 10.1109/JSEN.2023.3306371
[23] J. A. Placed, J. Strader, H. Carrillo, N. Atanasov, V. Indelman, L. Carlone, and J. A. Castellanos, “A survey on active simultaneous localization and mapping: State of the art and new frontiers,” IEEE Trans. Robot., vol. 39, no. 3, pp. 1686–1705, 2023. doi: 10.1109/TRO.2023.3248510
[24] L. Xia, J. Cui, R. Shen, X. Xu, Y. Gao, and X. Li, “A survey of image semantics-based visual simultaneous localization and mapping: Application-oriented solutions to autonomous navigation of mobile robots,” Int. J. Adv. Robot. Syst., vol. 17, no. 3, pp. 1–17, 2020.
[25] J. Crespo, J. C. Castillo, O. M. Mozos, and R. Barber, “Semantic information for robot navigation: A survey,” Appl. Sci., vol. 10, no. 2, pp. 1–28, 2020.
[26] X. Han, S. Li, X. Wang, and W. Zhou, “Semantic mapping for mobile robots in indoor scenes: A survey,” Information, vol. 12, no. 2, pp. 1–14, 2021.
[27] Y. Wang, Y. Tian, J. Chen, K. Xu, and X. Ding, “A survey of visual SLAM in dynamic environment: The evolution from geometric to semantic approaches,” IEEE Trans. Instrum. Meas., vol. 73, pp. 1–21, 2024.
[28] R. Eyvazpour, M. Shoaran, and G. Karimian, “Hardware implementation of SLAM algorithms: A survey on implementation approaches and platforms,” Artif. Intell. Rev., vol. 56, no. 7, pp. 6187–6239, 2023. doi: 10.1007/s10462-022-10310-5
[29] B. Al-Tawil, T. Hempel, A. Abdelrahman, and A. Al-Hamadi, “A review of visual SLAM for robotics: Evolution, properties, and future applications,” Frontiers Robot. AI, vol. 11, pp. 1–18, 2024.
[30] Y. Zhang, H. Yan, D. Zhu, J. Wang, C.-H. Zhang, W. Ding, X. Luo, C. Hua, and M. Q.-H. Meng, “Air-ground collaborative robots for fire and rescue missions: Towards mapping and navigation perspective,” arXiv preprint arXiv:2412.20699, 2024.
[31] R. Smith, M. Self, and P. Cheeseman, “Estimating uncertain spatial relationships in robotics,” in Auton. Robot Veh., 1990, pp. 167–193.
[32] K. Konolige, G. Grisetti, R. Kümmerle, W. Burgard, B. Limketkai, and R. Vincent, “Efficient sparse pose adjustment for 2D mapping,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2010, pp. 22–29.
[33] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with Rao-Blackwellized particle filters,” IEEE Trans. Robot., vol. 23, no. 1, pp. 34–46, 2007. doi: 10.1109/TRO.2006.889486
[34] A. Yilmaz and H. Temeltas, “Self-adaptive Monte Carlo method for indoor localization of smart AGVs using LIDAR data,” Robot. Auton. Syst., vol. 122, pp. 1–19, 2019.
[35] M. Wang, B. Xin, M. Jing, and Y. Qu, “An exploration-enhanced search algorithm for robot indoor source searching,” IEEE Trans. Robot., vol. 40, pp. 4160–4178, 2024. doi: 10.1109/TRO.2024.3454572
[36] T. Krajník, J. P. Fentanes, M. Hanheide, and T. Duckett, “Persistent localization and life-long mapping in changing environments using the frequency map enhancement,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2016, pp. 4558–4563.
[37] N. Banerjee, D. Lisin, S. R. Lenser, J. Briggs, R. Baravalle, V. Albanese, Y. Chen, A. Karimian, T. Ramaswamy, P. Pilotti, et al., “Lifelong mapping in the wild: Novel strategies for ensuring map stability and accuracy over time evaluated on thousands of robots,” Robot. Auton. Syst., vol. 164, pp. 1–15, 2023.
[38] J. Kim and W. Chung, “Localization of a mobile robot using a laser range finder in a glass-walled environment,” IEEE Trans. Ind. Electron., vol. 63, no. 6, pp. 3616–3627, 2016. doi: 10.1109/TIE.2016.2523460
[39] J. Jiang, R. Miyagusuku, A. Yamashita, and H. Asama, “Online glass confidence map building using laser rangefinder for mobile robots,” Adv. Robot., vol. 34, no. 23, pp. 1506–1521, 2020. doi: 10.1080/01691864.2020.1819873
[40] M. Awais, “Improved laser-based navigation for mobile robots,” in Proc. Int. Conf. Adv. Robot., 2009, pp. 1–6.
[41] Y. Zhang, G. Tian, X. Shao, and J. Cheng, “Effective safety strategy for mobile robots based on laser-visual fusion in home environments,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 52, no. 7, pp. 4138–4150, 2022. doi: 10.1109/TSMC.2021.3090443
[42] T.-J. Lee, C.-H. Kim, and D.-I. D. Cho, “A monocular vision sensor-based efficient SLAM method for indoor service robots,” IEEE Trans. Ind. Electron., vol. 66, no. 1, pp. 318–328, 2019. doi: 10.1109/TIE.2018.2826471
[43] S. P. P. da Silva, J. S. Almeida, E. F. Ohata, J. J. Rodrigues, V. H. C. de Albuquerque, and P. P. Reboucas Filho, “Monocular vision aided depth map from RGB images to estimate of localization and support to navigation of mobile robots,” IEEE Sensors J., vol. 20, no. 20, pp. 12040–12048, 2020. doi: 10.1109/JSEN.2020.2964735
[44] Q. Fu, H. Yu, L. Lai, J. Wang, X. Peng, W. Sun, and M. Sun, “A robust RGB-D SLAM system with points and lines for low texture indoor environments,” IEEE Sensors J., vol. 19, no. 21, pp. 9908–9920, 2019. doi: 10.1109/JSEN.2019.2927405
[45] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, 2015. doi: 10.1109/TRO.2015.2463671
[46] U. Maniscalco, I. Infantino, and A. Manfre, “Robust mobile robot self-localization by soft sensor paradigm,” in Proc. IEEE Int. Symp. Robot. Intell. Sensors, 2017, pp. 19–24.
[47] F. M. Rico, J. M. G. Hernández, R. Pérez-Rodríguez, J. D. Peña-Narvaez, and A. G. Gómez-Jacinto, “Open source robot localization for nonplanar environments,” J. Field Robot., vol. 41, no. 6, pp. 1922–1939, 2024. doi: 10.1002/rob.22353
[48] C.-A. Yu, H.-Y. Chen, C.-C. Wang, and L.-C. Fu, “Complex environment localization system using complementary ceiling and ground map information,” Auton. Robot., vol. 47, no. 6, pp. 669–683, 2023. doi: 10.1007/s10514-023-10116-6
[49] E. Marder-Eppstein, E. Berger, T. Foote, B. Gerkey, and K. Konolige, “The office marathon: Robust navigation in an indoor office environment,” in Proc. IEEE Int. Conf. Robot. Automat., 2010, pp. 300–307.
[50] W. Zhen, S. Zeng, and S. Soberer, “Robust localization and localizability estimation with a rotating laser scanner,” in Proc. IEEE Int. Conf. Robot. Automat., 2017, pp. 6240–6245.
[51] C. Premebida, D. R. Faria, and U. Nunes, “Dynamic Bayesian network for semantic place classification in mobile robotics,” Auton. Robot., vol. 41, no. 5, pp. 1161–1172, 2017. doi: 10.1007/s10514-016-9600-2
[52] S. Rosa, A. Patane, C. X. Lu, and N. Trigoni, “Semantic place understanding for human–robot coexistence—toward intelligent workplaces,” IEEE Trans. Human-Mach. Syst., vol. 49, no. 2, pp. 160–170, 2019. doi: 10.1109/THMS.2018.2875079
[53] A. Rottmann, O. M. Mozos, C. Stachniss, and W. Burgard, “Semantic place classification of indoor environments with mobile robots using boosting,” in Proc. Nat. Conf. Artif. Intell., vol. 5, 2005, pp. 1306–1311.
[54] V. Balaska, L. Bampis, M. Boudourides, and A. Gasteratos, “Unsupervised semantic clustering and localization for mobile robotics tasks,” Robot. Auton. Syst., vol. 131, pp. 1–10, 2020.
[55] R. Song, J. Liu, W. Bi, Y. Zhang, M. Zhang, C.-H. Zhang, and C. Hua, “Robot localization based on semantic information in dynamic indoor environments with similar layouts,” in Proc. IEEE Int. Conf. Robot. Biomimetics, 2024, pp. 1–6.
[56] M. Narayana, A. Kolling, L. Nardelli, and P. Fong, “Lifelong update of semantic maps in dynamic environments,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2020, pp. 6164–6171.
[57] C. Gomez, A. C. Hernandez, R. Barber, and C. Stachniss, “Localization exploiting semantic and metric information in non-static indoor environments,” J. Intell. Robot. Syst., vol. 109, no. 4, p. 86, 2023. doi: 10.1007/s10846-023-02021-y
[58] H. M. Do, M. Pham, W. Sheng, D. Yang, and M. Liu, “RiSH: A robot-integrated smart home for elderly care,” Robot. Auton. Syst., vol. 101, pp. 74–92, 2018. doi: 10.1016/j.robot.2017.12.008
[59] H. Taira, I. Rocco, J. Sedlar, M. Okutomi, J. Sivic, T. Pajdla, T. Sattler, and A. Torii, “Is this the right place? Geometric-semantic pose verification for indoor visual localization,” in Proc. IEEE Int. Conf. Comput. Vis., 2019, pp. 4373–4383.
[60] J. Biswas and M. M. Veloso, “Localization and navigation of the CoBots over long-term deployments,” Int. J. Robot. Res., vol. 32, no. 14, pp. 1679–1694, 2013. doi: 10.1177/0278364913503892
[61] J. Oršulić, D. Miklić, and Z. Kovačić, “Efficient dense frontier detection for 2-D graph SLAM based on occupancy grid submaps,” IEEE Robot. Autom. Lett., vol. 4, no. 4, pp. 3569–3576, 2019. doi: 10.1109/LRA.2019.2928203
[62] Y. Zhang, C.-H. Zhang, and X. Shao, “User preference-aware navigation for mobile robot in domestic via defined virtual area,” J. Netw. Comput. Appl., vol. 173, pp. 1–11, 2021.
[63] K.-T. Song, S.-Y. Jiang, and S.-Y. Wu, “Safe guidance for a walking-assistant robot using gait estimation and obstacle avoidance,” IEEE/ASME Trans. Mechatronics, vol. 22, no. 5, pp. 2070–2078, 2017. doi: 10.1109/TMECH.2017.2742545
[64] M. Yani, A. A. Saputra, W. H. Chin, and N. Kubota, “Investigation of obstacle prediction network for improving home-care robot navigation performance,” J. Robot. Mechatronics, vol. 35, no. 2, pp. 510–520, 2023. doi: 10.20965/jrm.2023.p0510
[65] Y. Zhang, M. Yin, H. Wang, and C. Hua, “Cross-level multi-modal features learning with transformer for RGB-D object recognition,” IEEE Trans. Circuits Syst. Video Technol., vol. 33, no. 12, pp. 7121–7130, 2023. doi: 10.1109/TCSVT.2023.3275814
[66] R. P. Padhy, F. Xia, S. K. Choudhury, P. K. Sa, and S. Bakshi, “Monocular vision aided autonomous UAV navigation in indoor corridor environments,” IEEE Trans. Sustain. Comput., vol. 4, no. 1, pp. 96–108, 2019. doi: 10.1109/TSUSC.2018.2810952
[67] A. Mora, R. Barber, and L. Moreno, “Leveraging 3D data for whole object shape and reflection aware 2D map building,” IEEE Sensors J., vol. 24, no. 14, pp. 21941–21948, 2024. doi: 10.1109/JSEN.2023.3321936
[68] D. Murray and C. Jennings, “Stereo vision based mapping and navigation for mobile robots,” in Proc. IEEE Int. Conf. Robot. Automat., vol. 2, 1997, pp. 1694–1699.
[69] S. Zug, F. Penzlin, A. Dietrich, T. T. Nguyen, and S. Albert, “Are laser scanners replaceable by Kinect sensors in robotic applications?” in Proc. IEEE Int. Symp. Robot. Sens. Environ., 2012, pp. 144–149.
[70] X. Qi, W. Wang, Z. Liao, X. Zhang, D. Yang, and R. Wei, “Object semantic grid mapping with 2D LiDAR and RGB-D camera for domestic robot navigation,” Appl. Sci., vol. 10, no. 17, p. 5782, 2020. doi: 10.3390/app10175782
[71] R. C. Luo and C. C. Lai, “Multisensor fusion-based concurrent environment mapping and moving object detection for intelligent service robotics,” IEEE Trans. Ind. Electron., vol. 61, no. 8, pp. 4043–4051, 2014. doi: 10.1109/TIE.2013.2288199
[72] H. Baltzakis, A. Argyros, and P. Trahanias, “Fusion of laser and visual data for robot motion planning and collision avoidance,” Mach. Vis. Appl., vol. 15, no. 2, pp. 92–100, 2003. doi: 10.1007/s00138-003-0133-2
[73] B. Lau, C. Sprunk, and W. Burgard, “Efficient grid-based spatial representations for robot navigation in dynamic environments,” Robot. Auton. Syst., vol. 61, no. 10, pp. 1116–1130, 2013. doi: 10.1016/j.robot.2012.08.010
[74] D. De Gregorio and L. Di Stefano, “SkiMap: An efficient mapping framework for robot navigation,” in Proc. IEEE Int. Conf. Robot. Automat., 2017, pp. 2569–2576.
[75] A. J. Sathyamoorthy, K. Weerakoon, M. Elnoor, M. Russell, J. Pusey, and D. Manocha, “MIM: Indoor and outdoor navigation in complex environments using multi-layer intensity maps,” in Proc. IEEE Int. Conf. Robot. Automat., 2024, pp. 10917–10924.
[76] S. Silva, N. Verdezoto, D. Paillacho, S. Millan-Norman, and J. D. Hernández, “Online social robot navigation in indoor, large and crowded environments,” in Proc. IEEE Int. Conf. Robot. Automat., 2023, pp. 9749–9756.
[77] F. Tosi, Y. Zhang, Z. Gong, E. Sandström, S. Mattoccia, M. R. Oswald, and M. Poggi, “How NeRFs and 3D Gaussian splatting are reshaping SLAM: A survey,” arXiv preprint arXiv:2402.13255, 2024.
[78] M. Kim, O. Kwon, H. Jun, and S. Oh, “RNR-Nav: A real-world visual navigation system using renderable neural radiance maps,” arXiv preprint arXiv:2410.05621, 2024.
[79] S. Katragadda, W. Lee, Y. Peng, P. Geneva, C. Chen, C. Guo, M. Li, and G. Huang, “NeRF-VINS: A real-time neural radiance field map-based visual-inertial navigation system,” in Proc. IEEE Int. Conf. Robot. Automat., 2024, pp. 10230–10237.
[80] D. Maier, A. Hornung, and M. Bennewitz, “Real-time navigation in 3D environments based on depth camera data,” in Proc. IEEE-RAS Int. Conf. Humanoid Robots, 2012, pp. 692–697.
[81] Q. Liu, N. Chen, Z. Liu, and H. Wang, “Toward learning-based visuomotor navigation with neural radiance fields,” IEEE Trans. Ind. Informat., vol. 20, no. 6, pp. 8907–8916, 2024. doi: 10.1109/TII.2024.3378829
[82] K. Muravyev and K. Yakovlev, “Evaluation of topological mapping methods in indoor environments,” IEEE Access, vol. 11, pp. 132683–132698, 2023. doi: 10.1109/ACCESS.2023.3335818
[83] F. Blochliger, M. Fehr, M. Dymczyk, T. Schneider, and R. Siegwart, “Topomap: Topological mapping and navigation based on visual SLAM maps,” in Proc. IEEE Int. Conf. Robot. Automat., 2018, pp. 3818–3825.
[84] C. Gomez, M. Fehr, A. Millane, A. C. Hernandez, J. Nieto, R. Barber, and R. Siegwart, “Hybrid topological and 3D dense mapping through autonomous exploration for large indoor environments,” in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 9673–9679.
[85] H. Bavle, J. L. Sanchez-Lopez, M. Shaheer, J. Civera, and H. Voos, “Situational graphs for robot navigation in structured indoor environments,” IEEE Robot. Autom. Lett., vol. 7, no. 4, pp. 9107–9114, 2022. doi: 10.1109/LRA.2022.3189785
[86] K. Zheng and A. Pronobis, “From pixels to buildings: End-to-end probabilistic deep networks for large-scale semantic mapping,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2019, pp. 3511–3518.
[87] K. Song, W. Liu, G. Chen, X. Xu, and Z. Xiong, “FHT-Map: Feature-based hybrid topological map for relocalization and path planning,” IEEE Robot. Autom. Lett., vol. 9, no. 6, pp. 5401–5408, 2024. doi: 10.1109/LRA.2024.3392493
[88] F. Fraundorfer, C. Engels, and D. Nistér, “Topological mapping, localization and navigation using image collections,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2007, pp. 3872–3877.
[89] N. Kim, O. Kwon, H. Yoo, Y. Choi, J. Park, and S. Oh, “Topological semantic graph memory for image-goal navigation,” in Proc. Conf. Robot Learn., 2023, pp. 393–402.
[90] J. Delfin, H. M. Becerra, and G. Arechavaleta, “Humanoid navigation using a visual memory with obstacle avoidance,” Robot. Auton. Syst., vol. 109, pp. 109–124, 2018. doi: 10.1016/j.robot.2018.08.010
[91] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, “OctoMap: An efficient probabilistic 3D mapping framework based on octrees,” Auton. Robot., vol. 34, no. 3, pp. 189–206, 2013. doi: 10.1007/s10514-012-9321-0
[92] T. Gervet, Z. Xian, N. Gkanatsios, and K. Fragkiadaki, “Act3D: 3D feature field transformers for multi-task robotic manipulation,” in Proc. Conf. Robot Learn., 2023, pp. 1–11.
[93] J. Bimbo, A. S. Morgan, and A. M. Dollar, “Force-based simultaneous mapping and object reconstruction for robotic manipulation,” IEEE Robot. Autom. Lett., vol. 7, no. 2, pp. 4749–4756, 2022. doi: 10.1109/LRA.2022.3152244
[94] M. Murooka, R. Ueda, S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba, “Planning and execution of groping behavior for contact sensor based manipulation in an unknown environment,” in Proc. IEEE Int. Conf. Robot. Automat., 2016, pp. 3955–3962.
[95] Y. Hara and M. Tomono, “Moving object removal and surface mesh mapping for path planning on 3D terrain,” Adv. Robot., vol. 34, no. 6, pp. 375–387, 2020. doi: 10.1080/01691864.2020.1717375
[96] T. Ran, L. Yuan, J. Zhang, Z. Wu, and L. He, “Object-oriented semantic SLAM based on geometric constraints of points and lines,” IEEE Trans. Cogn. Develop. Syst., vol. 15, no. 2, pp. 751–760, 2023. doi: 10.1109/TCDS.2022.3188172
[97] R.-Z. Qiu, Y. Hu, G. Yang, Y. Song, Y. Fu, J. Ye, J. Mu, R. Yang, N. Atanasov, S. Scherer, et al., “Learning generalizable feature fields for mobile manipulation,” arXiv preprint arXiv:2403.07563, 2024.
[98] R. Monica and J. Aleotti, “Point cloud projective analysis for part-based grasp planning,” IEEE Robot. Autom. Lett., vol. 5, no. 3, pp. 4695–4702, 2020. doi: 10.1109/LRA.2020.3003883
[99] R. Terasawa, Y. Ariki, T. Narihira, T. Tsuboi, and K. Nagasaka, “3D-CNN based heuristic guided task-space planner for faster motion planning,” in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 9548–9554.
[100] R. Wu, K. Cheng, Y. Zhao, C. Ning, G. Zhan, and H. Dong, “Learning environment-aware affordance for 3D articulated object manipulation under occlusions,” in Proc. Adv. Neural Inf. Process. Syst., 2023, pp. 1–18.
[101] Y. Zhang, G. Tian, J. Lu, M. Zhang, and S. Zhang, “Efficient dynamic object search in home environment by mobile robot: A priori knowledge-based approach,” IEEE Trans. Veh. Technol., vol. 68, no. 10, pp. 9466–9477, 2019. doi: 10.1109/TVT.2019.2934509
[102] Y. Wu, Y. Zhang, D. Zhu, Z. Deng, W. Sun, X. Chen, and J. Zhang, “An object SLAM framework for association, mapping, and high-level tasks,” IEEE Trans. Robot., vol. 39, no. 4, pp. 2912–2932, 2023. doi: 10.1109/TRO.2023.3273180
[103] S. Lu, H. Chang, E. P. Jing, A. Boularias, and K. Bekris, “OVIR-3D: Open-vocabulary 3D instance retrieval without training on 3D data,” in Proc. Conf. Robot Learn., 2023, pp. 1610–1620.
[104] R. K. Megalingam, S. Tantravahi, H. S. S. K. Tammana, and H. S. R. Puram, “2D-3D hybrid mapping for path planning in autonomous robots,” Int. J. Intell. Robot. Appl., vol. 7, no. 2, pp. 291–303, 2023. doi: 10.1007/s41315-023-00272-4
[105] R. Martins, D. Bersan, M. F. Campos, and E. R. Nascimento, “Extending maps with semantic and contextual object information for robot navigation: A learning-based framework using visual and depth cues,” J. Intell. Robot. Syst., vol. 99, pp. 555–569, 2020.
[106] J.-R. Ruiz-Sarmiento, C. Galindo, and J. Gonzalez-Jimenez, “Building multiversal semantic maps for mobile robot operation,” Knowl.-Based Syst., vol. 119, pp. 257–272, Mar. 2017. doi: 10.1016/j.knosys.2016.12.016
[107] C. Wang, M. Xia, and M. Q.-H. Meng, “Stable autonomous robotic wheelchair navigation in the environment with slope way,” IEEE Trans. Veh. Technol., vol. 69, no. 10, pp. 10759–10771, 2020. doi: 10.1109/TVT.2020.3009979
[108] J. Biswas and M. Veloso, “Depth camera based indoor mobile robot localization and navigation,” in Proc. IEEE Int. Conf. Robot. Automat., 2012, pp. 1697–1702.
[109] A. Hornung, M. Phillips, E. G. Jones, M. Bennewitz, M. Likhachev, and S. Chitta, “Navigation in three-dimensional cluttered environments for mobile manipulation,” in Proc. IEEE Int. Conf. Robot. Automat., 2012, pp. 423–429.
[110] F. Schmalstieg, D. Honerkamp, T. Welschehold, and A. Valada, “Learning hierarchical interactive multi-object search for mobile manipulation,” IEEE Robot. Autom. Lett., vol. 8, no. 12, pp. 8549–8556, 2023. doi: 10.1109/LRA.2023.3329619
[111] M. Günther, T. Wiemann, S. Albrecht, and J. Hertzberg, “Model-based furniture recognition for building semantic object maps,” Artif. Intell., vol. 247, pp. 336–351, 2017. doi: 10.1016/j.artint.2014.12.007
[112] M. Zhang, G. Tian, Y. Cui, Y. Zhang, and Z. Xia, “Hierarchical semantic knowledge-based object search method for household robots,” IEEE Trans. Emerg. Topics Comput. Intell., vol. 8, no. 1, pp. 930–941, 2024. doi: 10.1109/TETCI.2023.3297838
[113] L. Riazuelo, M. Tenorth, D. Di Marco, M. Salas, D. Gálvez-López, L. Mösenlechner, L. Kunze, M. Beetz, J. D. Tardós, L. Montano, et al., “RoboEarth semantic mapping: A cloud-enabled knowledge-based approach,” IEEE Trans. Autom. Sci. Eng., vol. 12, no. 2, pp. 432–443, 2015. doi: 10.1109/TASE.2014.2377791
[114] J. H. Kwak, J. Lee, J. J. Whang, and S. Jo, “Semantic grasping via a knowledge graph of robotic manipulation: A graph representation learning approach,” IEEE Robot. Autom. Lett., vol. 7, no. 4, pp. 9397–9404, 2022. doi: 10.1109/LRA.2022.3191194
[115] W. Bi, M. Yin, W. Ren, G. Zhao, Y. Zhang, and C. Hua, “Object fingerprinting-based environment model for service robots: Task-oriented modeling approach,” in Proc. IEEE Int. Conf. Cyber Technol. Autom., Control, Intell. Syst., 2023, pp. 504–509.
[116] J. G. Rogers and H. I. Christensen, “A conditional random field model for place and object classification,” in Proc. IEEE Int. Conf. Robot. Automat., 2012, pp. 1766–1772.
[117] L. Kunze and M. Beetz, “Envisioning the qualitative effects of robot manipulation actions using simulation-based projections,” Artif. Intell., vol. 247, pp. 352–380, 2017. doi: 10.1016/j.artint.2014.12.004
[118] C. Schenck and D. Fox, “Perceiving and reasoning about liquids using fully convolutional networks,” Int. J. Robot. Res., vol. 37, no. 4-5, pp. 452–471, 2018. doi: 10.1177/0278364917734052
[119] M. C. Gemici and A. Saxena, “Learning haptic representation for manipulating deformable food objects,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2014, pp. 638–645.
[120] K. Hauser and V. Ng-Thow-Hing, “Randomized multi-modal motion planning for a humanoid robot manipulation task,” Int. J. Robot. Res., vol. 30, no. 6, pp. 678–698, 2011. doi: 10.1177/0278364910386985
[121] A. Inceoglu, C. Koc, B. O. Kanat, M. Ersen, and S. Sariel, “Continuous visual world modeling for autonomous robot manipulation,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 49, no. 1, pp. 192–205, 2019. doi: 10.1109/TSMC.2017.2787482
[122] Y. Cui, G. Tian, Z. Jiang, M. Zhang, Y. Gu, and Y. Wang, “An active task cognition method for home service robot using multi-graph attention fusion mechanism,” IEEE Trans. Circuits Syst. Video Technol., vol. 34, no. 6, pp. 4957–4972, 2024. doi: 10.1109/TCSVT.2023.3339292
[123] Y. Zhang, G. Tian, X. Shao, M. Zhang, and S. Liu, “Semantic grounding for long-term autonomy of mobile robots toward dynamic object search in home environments,” IEEE Trans. Ind. Electron., vol. 70, no. 2, pp. 1655–1665, 2023. doi: 10.1109/TIE.2022.3159913
[124] M. Thosar, C. A. Mueller, G. Jäger, J. Schleiss, N. Pulugu, R. Mallikarjun Chennaboina, S. V. Rao Jeevangekar, A. Birk, M. Pfingsthorn, and S. Zug, “From multi-modal property dataset to robot-centric conceptual knowledge about household objects,” Frontiers Robot. AI, vol. 8, p. 87, 2021.
[125] J. Hertzberg, H. Jaeger, and F. Schönherr, “Learning to ground fact symbols in behavior-based robots,” in Proc. Eur. Conf. Artif. Intell., vol. 2, 2002, pp. 708–712.
[126] C. Li, G. Tian, and M. Zhang, “A semantic knowledge-based method for home service robot to grasp an object,” Knowl.-Based Syst., vol. 297, p. 111947, 2024.
[127] S. Coradeschi and A. Saffiotti, “An introduction to the anchoring problem,” Robot. Auton. Syst., vol. 43, no. 2-3, pp. 85–96, 2003. doi: 10.1016/S0921-8890(03)00021-6
[128] A. Persson, P. Z. Dos Martires, L. De Raedt, and A. Loutfi, “Semantic relational object tracking,” IEEE Trans. Cogn. Develop. Syst., vol. 12, no. 1, pp. 84–97, 2020. doi: 10.1109/TCDS.2019.2915763
[129] J. Elfring, S. van den Dries, M. Van De Molengraft, and M. Steinbuch, “Semantic world modeling using probabilistic multiple hypothesis anchoring,” Robot. Auton. Syst., vol. 61, no. 2, pp. 95–105, 2013. doi: 10.1016/j.robot.2012.11.005
[130] K.-T. Shih and H. H. Chen, “Exploiting perceptual anchoring for color image enhancement,” IEEE Trans. Multimedia, vol. 18, no. 2, pp. 300–310, 2016. doi: 10.1109/TMM.2015.2503918
[131] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. Reid, S. Gould, and A. Van Den Hengel, “Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018, pp. 3674–3683.
[132] A. Chikhalikar, A. A. Ravankar, J. V. S. Luces, and Y. Hirata, “Semantic-based multi-object search optimization in service robots using probabilistic and contextual priors,” IEEE Access, vol. 12, pp. 113151–113164, 2024. doi: 10.1109/ACCESS.2024.3444478
[133] M. Mantelli, F. M. Noori, D. Pittol, R. Maffei, J. Torresen, and M. Kolberg, “Semantic temporal object search system based on heat maps,” J. Intell. Robot. Syst., vol. 106, no. 4, p. 69, 2022. doi: 10.1007/s10846-022-01760-8
[134] A. Aydemir, M. Göbelbecker, A. Pronobis, K. Sjöö, and P. Jensfelt, “Plan-based object search and exploration using semantic spatial knowledge in the real world,” in Proc. Eur. Conf. Mobile Robots, 2011, pp. 13–18.
[135] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox, “6-DOF grasping for target-driven object manipulation in clutter,” in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 6232–6238.
[136] C. Wang, J. Cheng, J. Wang, X. Li, and M. Q.-H. Meng, “Efficient object search with belief road map using mobile robot,” IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 3081–3088, 2018. doi: 10.1109/LRA.2018.2849610
[137] J. Park, T. Yoon, J. Hong, Y. Yu, M. Pan, and S. Choi, “Zero-shot active visual search (ZAVIS): Intelligent object search for robotic assistants,” in Proc. IEEE Int. Conf. Robot. Automat., 2023, pp. 2004–2010.
[138] Z. Zeng, A. Röfer, and O. C. Jenkins, “Semantic linking maps for active visual object search,” in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 1984–1990.
[139] D. Honerkamp, M. Büchner, F. Despinoy, T. Welschehold, and A. Valada, “Language-grounded dynamic scene graphs for interactive object search with mobile manipulation,” IEEE Robot. Autom. Lett., vol. 9, no. 10, pp. 8298–8305, 2024. doi: 10.1109/LRA.2024.3441495
[140] I. Kostavelis and A. Gasteratos, “Semantic maps from multiple visual cues,” Expert Syst. Appl., vol. 68, pp. 45–57, 2017. doi: 10.1016/j.eswa.2016.10.014
[141] C. Keroglou, I. Kansizoglou, P. Michailidis, K. M. Oikonomou, I. T. Papapetros, P. Dragkola, I. T. Michailidis, A. Gasteratos, E. B. Kosmatopoulos, and G. C. Sirakoulis, “A survey on technical challenges of assistive robotics for elder people in domestic environments: The ASPiDA concept,” IEEE Trans. Med. Robot. Bionics, vol. 5, no. 2, pp. 196–205, 2023. doi: 10.1109/TMRB.2023.3261342
[142] S. Levine and D. Shah, “Learning robotic navigation from experience: Principles, methods and recent results,” Philos. Trans. Roy. Soc. B, vol. 378, no. 1869, p. 20210447, 2023. doi: 10.1098/rstb.2021.0447
[143] D. Katare, D. Perino, J. Nurmi, M. Warnier, M. Janssen, and A. Y. Ding, “A survey on approximate edge AI for energy efficient autonomous driving services,” IEEE Commun. Surveys Tuts., vol. 25, no. 4, pp. 2714–2754, 2023. doi: 10.1109/COMST.2023.3302474
[144] R. Firoozi, J. Tucker, S. Tian, A. Majumdar, J. Sun, W. Liu, Y. Zhu, S. Song, A. Kapoor, K. Hausman, et al., “Foundation models in robotics: Applications, challenges, and the future,” Int. J. Robot. Res., 2024, to be published. doi: 10.1177/02783649241281508