A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 10, Issue 10
Oct. 2023

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: T. Y. K. Zhang, J. X. Zhan, J. M. Shi, J. M. Xin, and N. N. Zheng, “Human-like decision-making of autonomous vehicles in dynamic traffic scenarios,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 10, pp. 1905–1917, Oct. 2023. doi: 10.1109/JAS.2023.123696

Human-Like Decision-Making of Autonomous Vehicles in Dynamic Traffic Scenarios

doi: 10.1109/JAS.2023.123696
Funds:  This work was supported by the National Key R&D Program of China (2022YFB2502900) and the National Natural Science Foundation of China (62088102, 61790563)
  • With the maturation of autonomous driving technology, operating autonomous vehicles in a socially acceptable manner has become a growing public demand. Human-like autonomous driving is expected because the differences between autonomous vehicles and human drivers directly affect safety. Although human-like decision-making has become a research hotspot, a unified theory has not yet been formed, and existing methods differ significantly in implementation and performance. This paper provides a comprehensive overview of human-like decision-making for autonomous vehicles. The following issues are discussed: 1) The intelligence level of most autonomous driving decision-making algorithms; 2) The driving datasets and simulation platforms for testing and verifying human-like decision-making; 3) The evaluation metrics of human-likeness, personalized driving, and the application of decision-making in real traffic scenarios; and 4) The potential research directions of human-like driving. These results are significant for creating interpretable human-like driving models and applying them in dynamic traffic scenarios. In the future, combining intuitive logical reasoning with hierarchical structures will be an important topic for further research and is expected to meet the needs of human-like driving.
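  • As a purely illustrative aside (not taken from the paper), the sketch below shows one way the hierarchical, interpretable structure mentioned above might look in code: a rule-based behavior layer standing in for intuitive logical reasoning, sitting on top of a simple motion layer. Separating maneuver selection from motion planning is what keeps the decision interpretable. All names and thresholds (Observation, BehaviorLayer, MotionLayer, the 2.5 s headway, the 40 m gap) are hypothetical assumptions, not descriptions of any surveyed method.

```python
# Illustrative sketch only: a minimal two-layer, "human-like" decision pipeline.
# A rule-based behavior layer picks a high-level maneuver; a motion layer turns
# it into a target speed and lane offset. All names/thresholds are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Maneuver(Enum):
    KEEP_LANE = "keep_lane"
    CHANGE_LEFT = "change_left"
    SLOW_DOWN = "slow_down"


@dataclass
class Observation:
    ego_speed: float      # ego vehicle speed, m/s
    lead_gap: float       # distance to lead vehicle in current lane, m
    lead_speed: float     # lead vehicle speed, m/s
    left_lane_gap: float  # free gap in the left lane, m


class BehaviorLayer:
    """Hand-written, interpretable rules standing in for intuitive reasoning."""

    def decide(self, obs: Observation) -> Maneuver:
        time_gap = obs.lead_gap / max(obs.ego_speed, 0.1)  # crude time headway
        if time_gap > 2.5:                                 # plenty of room: cruise
            return Maneuver.KEEP_LANE
        if obs.left_lane_gap > 40.0:                       # adjacent lane is free
            return Maneuver.CHANGE_LEFT
        return Maneuver.SLOW_DOWN                          # otherwise back off


class MotionLayer:
    """Maps the chosen maneuver to a target speed and lane offset."""

    def plan(self, maneuver: Maneuver, obs: Observation) -> dict:
        if maneuver is Maneuver.SLOW_DOWN:
            return {"target_speed": min(obs.ego_speed, obs.lead_speed), "lane_offset": 0}
        if maneuver is Maneuver.CHANGE_LEFT:
            return {"target_speed": obs.ego_speed, "lane_offset": +1}
        return {"target_speed": obs.ego_speed, "lane_offset": 0}


if __name__ == "__main__":
    obs = Observation(ego_speed=25.0, lead_gap=30.0, lead_speed=20.0, left_lane_gap=60.0)
    maneuver = BehaviorLayer().decide(obs)
    print(maneuver, MotionLayer().plan(maneuver, obs))
```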

     



    Article Metrics

    Article views: 1143 · PDF downloads: 310

    Highlights

    • In recent years, there have been many works on autonomous driving decision-making, but a comprehensive review from a "human-like" perspective has been lacking. This paper is the first review that provides a comprehensive overview of human-like decision-making for autonomous vehicles.
    • This paper discusses several original issues: 1) The intelligence level of most autonomous driving decision-making algorithms; 2) The driving datasets and simulation platforms for testing and verifying human-like decision-making; 3) The evaluation metrics of human-likeness, personalized driving, and the application of decision-making in real traffic scenarios; and 4) The potential research directions of human-like driving.
    • Improving the decision-making system's ability to build a reasonable driving model is essential so that autonomous vehicles can learn human expert knowledge and driving habits. The research results of this paper are significant for creating interpretable human-like driving models and applying them in dynamic traffic scenarios.
