A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 7, Issue 3
May 2020

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Guangyuan Pan, Liping Fu, Qili Chen, Ming Yu and Matthew Muresan, "Road Safety Performance Function Analysis With Visual Feature Importance of Deep Neural Nets," IEEE/CAA J. Autom. Sinica, vol. 7, no. 3, pp. 735-744, May 2020. doi: 10.1109/JAS.2020.1003108

Road Safety Performance Function Analysis With Visual Feature Importance of Deep Neural Nets

doi: 10.1109/JAS.2020.1003108
Funds:  This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Ontario Research Fund – Research Excellence (ORF-RE), the Ministry of Transportation Ontario (MTO) through its Highway Infrastructure Innovation Funding Program (HIIFP), Beijing Postdoctoral Science Foundation (ZZ-2019-65), Beijing Chaoyang District Postdoctoral Science Foundation (2019ZZ-45), and Beijing Municipal Education Commission (KM201811232016)
Abstract
  • Road safety performance function (SPF) analysis using data-driven and nonparametric methods, especially recently developed deep learning approaches, has achieved increasing success. However, because the learning mechanisms of deep learning are hidden in a “black box”, traffic feature extraction and intelligent importance analysis remain unsolved. This paper addresses the problem with a deciphered version of deep neural networks (DNN), one of the most popular deep learning models. The approach builds on visualization, feature importance, and sensitivity analysis, and can evaluate the contributions of input variables to the model’s “black box” feature learning process and output decision. First, a visual feature importance (ViFI) method that describes the importance of input features is proposed, combining diagram-based and numerical analysis. Second, by observing with ViFI how the weights change during the unsupervised training and fine-tuning of the DNN, the final contributions of the input features are calculated according to the importance equations we propose for both steps. A case study on road SPF analysis is then demonstrated, using data collected from a major Canadian highway, Highway 401. The proposed method effectively deciphers the model’s inner workings, allowing the significant features to be identified and the uninformative ones to be eliminated. Finally, the revised dataset is used in crash modeling and vehicle collision prediction, and the testing results verify that the deciphered and revised model achieves state-of-the-art performance.
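To make the weight-based importance idea above concrete, here is a minimal, hypothetical sketch, not the paper's exact ViFI equations (those are given in the full text): it ranks input features by the total magnitude of weight paths from each input to the output of a plain feed-forward network.

```python
# A minimal sketch of weight-path feature importance, assuming a plain
# feed-forward net; the paper's exact ViFI equations are in the full text.
import numpy as np

def weight_based_importance(weight_mats):
    """Score each input feature by total |weight| path strength to the output.

    weight_mats: list of 2-D arrays, where layer k maps (n_in_k, n_out_k).
    Returns scores normalized to sum to 1.
    """
    # Entry (i, j) of `path` aggregates all input-i -> unit-j path magnitudes.
    path = np.abs(weight_mats[0])
    for w in weight_mats[1:]:
        path = path @ np.abs(w)
    scores = path.sum(axis=1)  # collapse over output units
    return scores / scores.sum()

# Toy usage: 5 inputs, one hidden layer of 8 units, a single output.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(5, 8))
w2 = rng.normal(size=(8, 1))
print(weight_based_importance([w1, w2]))  # one score per input feature
```

In the paper's setting, this kind of ranking is tracked separately for the unsupervised pre-training weights and the fine-tuned weights; the sketch collapses that into a single trained network.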

     

  • [1]
    U.S. Department of Transportation, Federal Highway Administration Research and Technology, [Online]. Available: https://www.fhwa.dot.gov/research/publications/technical, 2020.
    [2]
    U.S. Department of Transportation, Highway Safety Manual (HSM), American Association of State Highway and Transportation Officials (AASHTO), Washington, DC, USA, 2010.
    [3]
    J. Aguero-Valverde and P. P. Jovanis, “Analysis of road crash frequency with spatial models,” J. Transportation Research Board, vol. 2061, no. 1, pp. 55–63, Jan. 2008. doi: 10.3141/2061-07
    [4]
    V. Shankar, F. Mannering, and W. Barfield, “Effect of roadway geometrics and environmental factors on rural freeway accident frequencies,” Accident Analysis &Prevention, vol. 27, no. 3, pp. 371–389, 1995.
    [5]
    R. D. Connors, M. Maher, A. Wood, and L. Mountain, “Methodology for fitting and updating predictive accident models with trend,” Accident Analysis and Prevention, vol. 56, pp. 82–94, Jul. 2013. doi: 10.1016/j.aap.2013.03.009
    [6]
    S. P. Miaou and D. Lord, “Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods,” J. Transportation Research Board, vol. 1840, pp. 31–40, Jan. 2003. doi: 10.3141/1840-04
    [7]
    M. Abdel-Aty and K. Haleem, “Analyzing angle crashes at unsignalized intersections using machine learning techniques,” Accident Analysis &Prevention, vol. 43, no. 1, pp. 461–470, Jan. 2011.
    [8]
    L. Y. Chang, “Analysis of freeway accident frequencies: negative binomial regression versus artificial neural network,” Safety Science, vol. 43, no. 8, pp. 541–557, 2005. doi: 10.1016/j.ssci.2005.04.004
    [9]
    L. Thakali, L. S. Fu, and T. Chen, “Model-based versus data-driven approach for road safety analysis: do more data help?” Transportation Research Board 95th Annual Meeting, pp. 3516–3531, 2016.
    [10]
    I. Goodfellow, B. Yoshua, and C. Aaron, Deep Learning, Cambridge: MIT Press, vol. 1. 2016.
    [11]
    G. Hinton, S. Osindero, and Y. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, Jul. 2006. doi: 10.1162/neco.2006.18.7.1527
    [12]
    G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, Jul. 2006. doi: 10.1126/science.1127647
    [13]
    Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015. doi: 10.1038/nature14539
    [14]
    K. He and J. Sun, “Convolutional neural networks at constrained time cost,” in Proc. Conf. on Computer Vision and Pattern Recognition, pp. 5353–5360, Jun. 2015.
    [15]
    P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “OverFeat: integrated recognition, localization and detection using convolutional networks,” in Proc. Int. Conf. on Learning Representations, Feb. 2014.
    [16]
    A. G. Howard, “Some improvements on deep convolutional neural network based image classification,” in Proc. Int. Conf. on Learning Representations, Oct. 2014.
    [17]
    Z. Chen-Mccaig, R. Hoseinnezhad, and A. Hadiasha, “Convolutional neural networks for texture recognition using transfer learning,” in Proc. IEEE Int. Conf. on Control, Autom. and Information Sciences, pp. 187–192, Oct. 2017.
    [18]
    F. C. Soon, H. Y. Khaw, J. H. Chuah, and J. Kanesan, “Hyper-parameters optimization of deep CNN architecture for vehicle logo recognition,” IET Intelligent Transport Systems, vol. 12, no. 8, pp. 939–946, Jul. 2018. doi: 10.1049/iet-its.2018.5127
    [19]
    G. Pan, L. Fu, L. Thakali, M. Muresan, and M. Yu. “An improved deep belief network model for road safety analyses,” in Proc. 97th Transportation Research Board Annual Meeting, no. 18-00835, Dec. 2018.
    [20]
    G. Y. Pan, L. P. Fu, and L. Thakali, “Development of a global road safety performance function using deep neural networks,” Int. J. Transportation Science &Technology, vol. 6, no. 3, pp. 159–173, Sep. 2017.
    [21]
    A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 427–436, Apr. 2015.
    [22]
    I. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv: 1412.6572, Mar. 2015.
    [23]
    N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” arXiv preprint arXiv: 1602.02697, Mar. 2017.
    [24]
    N. Narodytska and S. P. Kasiviswanathan, “Simple black-box adversarial attacks on deep neural networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition Workshops, pp. 6–14, Jul. 2017.
    [25]
    R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Computing Surveys (CSUR), vol. 51, no. 5, pp. article. 93, Feb. 2018.
    [26]
    W. Samek, A. Binder, G. Montavon, S. Lapuschkin, and K. R. Müller, “Evaluating the visualization of what a deep neural network has learned,” IEEE Trans. Neural Networks and Learning Systems, vol. 28, no. 11, pp. 2660–2673, Sep. 2015.
    [27]
    A. Coates, A. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in Proc. Int. Conf. on Artificial Intelligence and Statistics, pp. 215–223, 2011.
    [28]
    R. Shwartz-Ziv and N. Tishby, “Opening the black box of deep neural networks via information,” arXiv preprint arXiv: 1703.00810, Apr. 2017.
    [29]
    P. W. Koh and P. Liang, “Understanding black-box predictions via influence functions,” in Proc. 34th Int. Conf. on Machine Learning, pp. 1885–1894, Mar. 2017.
    [30]
    J. Thiagarajan, B. Kailkhura, P. Sattigeri, and K. Ramamurthy, “Tree-View: peeking into deep neural networks via feature-space partitioning,” arXiv preprint arXiv: 1611.07429, Nov. 2016.
    [31]
    Y. Lee, A. Scolari, B. Chun, M. D. Santambrogio, M. Weimer, and M. Interlandi, “PRETZEL: opening the black box of machine learning prediction serving systems”, in Proc. 13th USENIX Symposium on Operating Systems Design and Implementation, pp. 611–626, Oct. 2018.
    [32]
    M. Honegger, “Shedding light on black box machine learning algorithms: development of an axiomatic framework to assess the auality of aethods that explain individual predictions”, arXiv preprint arXiv: 1808.05054, Aug. 2018.
    [33]
    D. Castelvecchi, “Can we open the black box of AI?” Nature, vol. 538, no. 7623, pp. 20–23, Oct. 2016. doi: 10.1038/538020a
    [34]
    P. Voosen, “How AI detectives are cracking open the black box of deep learning,” Science, vol. 357, no. 6346, pp. 22–28, 2017. doi: 10.1126/science.357.6346.22
    [35]
    A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important features through propagating activation differences”, arXiv preprint arXiv: 1704.02685, Oct. 2019.
    [36]
    M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Proc. European Conf. on Computer Vision, pp. 818–833, Nov. 2014.
    [37]
    M. Shang, X. Luo, Z. Liu, J. Chen, Y. Yuan, and M. Zhou, “Randomized latent factor model for high-dimensional and sparse matrices from industrial applications,” IEEE/CAA J. Autom. Sinica, vol. 6, no. 1, pp. 131–141, Jan. 2019. doi: 10.1109/JAS.2018.7511189
    [38]
    X. Luo, M. Zhou, S. Li, and M. Shang, “An inherently nonnegative latent factor model for high-dimensional and sparse matrices from industrial applications,” IEEE Trans. Industrial Informatics, vol. 14, no. 5, pp. 2011–2022, Oct. 2017.




    Highlights

    • This is the first study of explainable AI in both unsupervised and supervised learning, and feature importance equations are proposed for both stages. We demonstrate a diagram- and numerical-analysis-based method, called visual feature importance (ViFI), for understanding the black-box feature learning process.
    • Two popular techniques, visualization and sensitivity analysis, are combined to optimize the input and help decide the model's structure (a minimal sketch follows this list). The method intuitively highlights which areas respond positively or negatively to the inputs, and shows how a DBN model, especially during unsupervised learning, learns differently from other methods.
    • Explainable AI is applied in traffic engineering for the first time. Specifically, the ViFI method is applied as a tool to describe feature importance and establish a more reasonable road safety performance function, achieving state-of-the-art accuracy in road safety analysis.
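As a rough illustration of the sensitivity-analysis component mentioned in the second highlight, the sketch below perturbs one input feature at a time and records how much a trained model's prediction moves. Here `model_predict` is a hypothetical stand-in for any trained predictor, not the paper's DBN.

```python
# A hedged sketch of one-at-a-time sensitivity analysis: perturb each input
# and measure the mean shift in predictions. `model_predict` is a
# hypothetical stand-in for any trained model, not the paper's DBN.
import numpy as np

def sensitivity_scores(model_predict, X, delta=0.1):
    """Mean |f(X with feature i nudged) - f(X)| for each feature i."""
    base = model_predict(X)
    scores = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, i] += delta * X[:, i].std()  # nudge feature i only
        scores[i] = np.mean(np.abs(model_predict(X_pert) - base))
    return scores / scores.sum()

# Toy usage with a linear stand-in model; feature 2 should rank highest.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
model = lambda Z: Z @ np.array([0.5, 0.1, 3.0, 0.0])
print(sensitivity_scores(model, X))
```

A one-at-a-time scheme like this is the simplest way to attribute output changes to individual inputs; it ignores feature interactions, which is why the paper pairs sensitivity analysis with weight visualization rather than relying on either alone.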
