IEEE/CAA Journal of Automatica Sinica
Citation: Z. Zhang, Z. Lei, M. Omura, H. Hasegawa, and S. Gao, “Dendritic learning-incorporated vision transformer for image recognition,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 2, pp. 539–541, Feb. 2024. doi: 10.1109/JAS.2023.123978
[1] S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image segmentation using deep learning: A survey,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 44, no. 7, pp. 3523–3542, 2021.
[2] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[3] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
[4] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16×16 words: Transformers for image recognition at scale,” in Proc. Int. Conf. Learning Representations, 2021.
[5] S. Gao, M. Zhou, Y. Wang, J. Cheng, H. Yachi, and J. Wang, “Dendritic neuron model with effective learning algorithms for classification, approximation, and prediction,” IEEE Trans. Neural Networks and Learning Systems, vol. 30, no. 2, pp. 601–614, 2019. doi: 10.1109/TNNLS.2018.2846646
[6] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943. doi: 10.1007/BF02478259
[7] D. T. Tran, S. Kiranyaz, M. Gabbouj, and A. Iosifidis, “Heterogeneous multilayer generalized operational perceptron,” IEEE Trans. Neural Networks and Learning Systems, vol. 31, no. 3, pp. 710–724, 2019.
[8] A. Taherkhani, A. Belatreche, Y. Li, G. Cosma, L. P. Maguire, and T. M. McGinnity, “A review of learning in biologically plausible spiking neural networks,” Neural Networks, vol. 122, pp. 253–272, 2020. doi: 10.1016/j.neunet.2019.09.036
[9] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2021, pp. 10012–10022.
[10] T. Xiao, M. Singh, E. Mintun, T. Darrell, P. Dollár, and R. Girshick, “Early convolutions help transformers see better,” Advances in Neural Information Processing Systems, vol. 34, pp. 30392–30400, 2021.
[11] H. Touvron, M. Cord, A. Sablayrolles, G. Synnaeve, and H. Jégou, “Going deeper with image transformers,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2021, pp. 32–42.
[12] C. Koch, T. Poggio, and V. Torre, “Retinal ganglion cells: A functional interpretation of dendritic morphology,” Philosophical Trans. Royal Society of London B, vol. 298, no. 1090, pp. 227–263, 1982.
[13] Y. Yu, Z. Lei, Y. Wang, T. Zhang, C. Peng, and S. Gao, “Improving dendritic neuron model with dynamic scale-free network-based differential evolution,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 1, pp. 99–110, 2021.
[14] H. He, S. Gao, T. Jin, S. Sato, and X. Zhang, “A seasonal-trend decomposition-based dendritic neuron model for financial time series prediction,” Applied Soft Computing, vol. 108, p. 107488, 2021. doi: 10.1016/j.asoc.2021.107488
[15] S. Lee, S. Lee, and B. Song, “Improving vision transformers to learn small-size dataset from scratch,” IEEE Access, vol. 10, p. 123, 2022.
[16] R. Yuste, “Dendritic spines and distributed circuits,” Neuron, vol. 71, no. 5, pp. 772–781, 2011. doi: 10.1016/j.neuron.2011.07.024
[17] X. Wu, X. Liu, W. Li, and Q. Wu, “Improved expressivity through dendritic neural networks,” Advances in Neural Information Processing Systems, vol. 31, pp. 8057–8068, 2018.
[18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. Int. Conf. Learning Representations, 2015.