[1] F.-Y. Wang, N.-N. Zheng, D. Cao, C. M. Martinez, L. Li, and T. Liu, “Parallel driving in CPSS: a unified approach for transport automation and vehicle intelligence,” IEEE/CAA J. Autom. Sinica, vol. 4, no. 4, pp. 577–587, 2017. doi: 10.1109/JAS.2017.7510598
[2] C. Lv, D. Cao, Y. Zhao, D. J. Auger, M. Sullman, H. Wang, L. M. Dutka, L. Skrypchuk, and A. Mouzakitis, “Analysis of autopilot disengagements occurring during autonomous vehicle testing,” IEEE/CAA J. Autom. Sinica, vol. 5, no. 1, pp. 58–68, 2018.
[3] Y. Xing, C. Lv, L. Chen, H. Wang, H. Wang, D. Cao, E. Velenis, and F.-Y. Wang, “Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on ACP-based parallel vision,” IEEE/CAA J. Autom. Sinica, vol. 5, no. 3, pp. 645–661, 2018. doi: 10.1109/JAS.2018.7511063
[4] B. Jähne and H. Haußecker, Computer Vision and Applications: A Guide for Students and Practitioners. Elsevier, 2000.
[5] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Malaysia: Pearson Education Limited, 2016.
[6] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: the KITTI dataset,” Int. J. Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013. doi: 10.1177/0278364913491297
[7] Z. Chen, J. Zhang, and D. Tao, “Progressive LiDAR adaptation for road detection,” IEEE/CAA J. Autom. Sinica, vol. 6, no. 3, pp. 693–702, 2019. doi: 10.1109/JAS.2019.1911459
[8] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in Proc. IEEE Int. Conf. Computer Vision, 2015, pp. 2722–2730.
[9] H. Guo, D. Cao, H. Chen, C. Lv, H. Wang, and S. Yang, “Vehicle dynamic state estimation: state of the art schemes and perspectives,” IEEE/CAA J. Autom. Sinica, vol. 5, no. 2, pp. 418–431, 2018. doi: 10.1109/JAS.2017.7510811
[10] V. Rausch, A. Hansen, E. Solowjow, C. Liu, E. Kreuzer, and J. K. Hedrick, “Learning a deep neural net policy for end-to-end control of autonomous vehicles,” in Proc. IEEE American Control Conf., 2017, pp. 4914–4919.
[11] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: an open urban driving simulator,” in Proc. 1st Annual Conf. Robot Learning, 2017, pp. 1–16.
[12] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “AirSim: high-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics. Springer, 2018, pp. 621–635.
[13] S. Kato, S. Tokunaga, Y. Maruyama, S. Maeda, M. Hirabayashi, Y. Kitsukawa, A. Monrroy, T. Ando, Y. Fujii, and T. Azumi, “Autoware on board: enabling autonomous vehicles with embedded systems,” in Proc. ACM/IEEE 9th Int. Conf. Cyber-Physical Systems, 2018, pp. 287–296.
[14] H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 3530–3538.
[15] A. Seff and J. Xiao, “Learning from maps: visual common sense for autonomous driving,” arXiv preprint arXiv:1611.08583, 2016.
[16] M. Haklay and P. Weber, “OpenStreetMap: user-generated street maps,” IEEE Pervasive Comput., vol. 7, no. 4, pp. 12–18, 2008. doi: 10.1109/MPRV.2008.80
[17] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell, “BDD100K: a diverse driving video database with scalable annotation tooling,” arXiv preprint arXiv:1805.04687, 2018.
[18] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
[19] S. Lee, J. Kim, J. S. Yoon, S. Shin, O. Bailo, N. Kim, T.-H. Lee, H. S. Hong, S.-H. Han, and I. S. Kweon, “VPGNet: vanishing point guided network for lane and road marking detection and recognition,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 1947–1955.
[20] G. Li, Y. Yang, and X. Qu, “Deep learning approaches on pedestrian detection in hazy weather,” IEEE Trans. Industrial Electronics, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8880634/
[21] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang, “The ApolloScape dataset for autonomous driving,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, 2018, pp. 954–960.
[22] A. Sauer, N. Savinov, and A. Geiger, “Conditional affordance learning for driving in urban environments,” arXiv preprint arXiv:1806.06498, 2018.
[23] C. Sun, J. M. U. Vianney, and D. Cao, “Affordance learning in direct perception for autonomous driving,” arXiv preprint arXiv:1903.08746, 2019.
[24] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh, “VQA: visual question answering,” in Proc. IEEE Int. Conf. Computer Vision, 2015, pp. 2425–2433.
[25] S. Ruder, “An overview of multi-task learning in deep neural networks,” arXiv preprint arXiv:1706.05098, 2017.
[26] S. Rezaei and R. Sengupta, “Kalman filter-based integration of DGPS and vehicle sensors for localization,” IEEE Trans. Control Systems Technology, vol. 15, no. 6, pp. 1080–1088, 2007. doi: 10.1109/TCST.2006.886439
[27] S. E. Li, G. Li, J. Yu, C. Liu, B. Cheng, J. Wang, and K. Li, “Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles,” Mechanical Systems and Signal Processing, vol. 98, pp. 173–189, 2018. doi: 10.1016/j.ymssp.2017.04.041
[28] S. Gao, Y. Hou, H. Dong, S. Stichel, and B. Ning, “High-speed trains automatic operation with protection constraints: a resilient nonlinear gain-based feedback control approach,” IEEE/CAA J. Autom. Sinica, vol. 6, no. 4, pp. 992–999, 2019. doi: 10.1109/JAS.2019.1911582
[29] J. Kong, M. Pfeiffer, G. Schildbach, and F. Borrelli, “Kinematic and dynamic vehicle models for autonomous driving control design,” in Proc. IEEE Intelligent Vehicles Symp., 2015, pp. 1094–1099.
[30] K. T. Leung, J. F. Whidborne, D. Purdy, and P. Barber, “Road vehicle state estimation using low-cost GPS/INS,” Mechanical Systems and Signal Processing, vol. 25, no. 6, pp. 1988–2004, 2011. doi: 10.1016/j.ymssp.2010.08.003
[31] H. Schafer, E. Santana, A. Haden, and R. Biasini, “A commute in data: the comma2k19 dataset,” arXiv preprint arXiv:1812.05752, 2018.
[32] S. Miura, L.-T. Hsu, F. Chen, and S. Kamijo, “GPS error correction with pseudorange evaluation using three-dimensional maps,” IEEE Trans. Intelligent Transportation Systems, vol. 16, no. 6, pp. 3104–3115, 2015. doi: 10.1109/TITS.2015.2432122
[33] H. Sairo, D. Akopian, and J. Takala, “Weighted dilution of precision as quality measure in satellite positioning,” IEE Proceedings - Radar, Sonar and Navigation, vol. 150, no. 6, pp. 430–436, 2003. doi: 10.1049/ip-rsn:20031008
[34] K. Czarnecki and R. Salay, “Towards a framework to manage perceptual uncertainty for safe automated driving,” in Proc. Int. Conf. Computer Safety, Reliability, and Security. Springer, 2018, pp. 439–445.
[35] E. Herrera-Viedma, F. Herrera, and F. Chiclana, “A consensus model for multiperson decision making with different preference structures,” IEEE Trans. Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 32, no. 3, pp. 394–402, 2002. doi: 10.1109/TSMCA.2002.802821
[36] P. M. Kebria, A. Khosravi, S. M. Salaken, and S. Nahavandi, “Deep imitation learning for autonomous vehicles based on convolutional neural networks,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 1, pp. 82–95, 2020.
[37] Z. Wu, C. Shen, and A. van den Hengel, “Wider or deeper: revisiting the ResNet model for visual recognition,” Pattern Recognition, vol. 90, pp. 119–133, 2019. doi: 10.1016/j.patcog.2019.01.006
[38] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2009, pp. 248–255.