Citation: Z. Luo, X. Jin, Y. Luo, Q. Zhou, and X. Luo, “Analysis of students’ positive emotion and smile intensity using sequence-relative key-frame labeling and deep-asymmetric convolutional neural network,” IEEE/CAA J. Autom. Sinica.
[1] B. Berweger, S. Born, and J. Dietrich, “Expectancy-value appraisals and achievement emotions in an online learning environment: Within- and between-person relationships,” Learn. Instr., vol. 77, p. 101546, Feb. 2022. doi: 10.1016/j.learninstruc.2021.101546
[2] B. L. Fredrickson, “The broaden-and-build theory of positive emotions,” Philos. Trans. Roy. Soc. B: Biol. Sci., vol. 359, no. 1449, pp. 1367–1377, Sep. 2004. doi: 10.1098/rstb.2004.1512
[3] B. L. Fredrickson, “Positive emotions broaden and build,” Adv. Exp. Soc. Psychol., vol. 47, pp. 1–53, Dec. 2013.
[4] A. M. Uzun and Z. Yıldırım, “Exploring the effect of using different levels of emotional design features in multimedia science learning,” Comput. Educ., vol. 119, pp. 112–128, Apr. 2018. doi: 10.1016/j.compedu.2018.01.002
[5] A. Rodríguez-Muñoz, M. Antino, P. Ruiz-Zorrilla, and E. Ortega, “Positive emotions, engagement, and objective academic performance: A weekly diary study,” Learn. Individ. Differ., vol. 92, p. 102087, Dec. 2021. doi: 10.1016/j.lindif.2021.102087
[6] R. Pekrun, S. Lichtenfeld, H. W. Marsh, K. Murayama, and T. Goetz, “Achievement emotions and academic performance: Longitudinal models of reciprocal effects,” Child Dev., vol. 88, no. 5, pp. 1653–1670, Sep.-Oct. 2017. doi: 10.1111/cdev.12704
[7] M. Dindar, S. Järvelä, S. Ahola, X. Huang, and G. Zhao, “Leaders and followers identified by emotional mimicry during collaborative learning: A facial expression recognition study on emotional valence,” IEEE Trans. Affect. Comput., vol. 13, no. 3, pp. 1390–1400, Jul.-Sep. 2022. doi: 10.1109/TAFFC.2020.3003243
[8] X. Luo, Z. Li, W. Yue, and S. Li, “A calibrator fuzzy ensemble for highly-accurate robot arm calibration,” IEEE Trans. Neural Netw. Learn. Syst., vol. 36, no. 2, pp. 2169–2181, Feb. 2025. doi: 10.1109/TNNLS.2024.3354080
[9] Z. Ambadar, J. W. Schooler, and J. F. Cohn, “Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions,” Psychol. Sci., vol. 16, no. 5, pp. 403–410, May 2005. doi: 10.1111/j.0956-7976.2005.01548.x
[10] J. Chen, C. Guo, R. Xu, K. Zhang, Z. Yang, and H. Liu, “Toward children’s empathy ability analysis: Joint facial expression recognition and intensity estimation using label distribution learning,” IEEE Trans. Ind. Inf., vol. 18, no. 1, pp. 16–25, Jan. 2022. doi: 10.1109/TII.2021.3075989
[11] R. Zhao, Q. Gan, S. Wang, and Q. Ji, “Facial expression intensity estimation using ordinal information,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 3466–3474.
[12] O. Ekundayo and S. Viriri, “Facial expression recognition and ordinal intensity estimation: A multilabel learning approach,” in Proc. 15th Int. Conf. Advances in Visual Computing, San Diego, USA, 2020, pp. 581–592.
[13] K. Yang, C. Wang, Y. Gu, Z. Sarsenbayeva, B. Tag, T. Dingler, G. Wadley, and J. Goncalves, “Behavioral and physiological signals-based deep multimodal approach for mobile emotion recognition,” IEEE Trans. Affect. Comput., vol. 14, no. 2, pp. 1082–1097, Apr.-Jun. 2023. doi: 10.1109/TAFFC.2021.3100868
[14] Y. Wang, S. Qiu, D. Li, C. Du, B.-L. Lu, and H. He, “Multi-modal domain adaptation variational autoencoder for EEG-based emotion recognition,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 9, pp. 1612–1626, Sep. 2022. doi: 10.1109/JAS.2022.105515
[15] A. V. Savchenko, L. V. Savchenko, and I. Makarov, “Classifying emotions and engagement in online learning based on a single facial expression recognition neural network,” IEEE Trans. Affect. Comput., vol. 13, no. 4, pp. 2132–2143, Oct.-Dec. 2022. doi: 10.1109/TAFFC.2022.3188390
[16] A. Mollahosseini, B. Hasani, and M. H. Mahoor, “AffectNet: A database for facial expression, valence, and arousal computing in the wild,” IEEE Trans. Affect. Comput., vol. 10, no. 1, pp. 18–31, Jan.-Mar. 2019. doi: 10.1109/TAFFC.2017.2740923
[17] O. M. Nezami, M. Dras, L. Hamey, D. Richards, S. Wan, and C. Paris, “Automatic recognition of student engagement using deep learning and facial expression,” in Proc. European Conf. Machine Learning and Knowledge Discovery in Databases, Würzburg, Germany, 2020, pp. 273–289.
[18] T. S. Ashwin and R. M. R. Guddeti, “Unobtrusive behavioral analysis of students in classroom environment using non-verbal cues,” IEEE Access, vol. 7, pp. 150693–150709, Oct. 2019. doi: 10.1109/ACCESS.2019.2947519
[19] S. Li and W. Deng, “Deep facial expression recognition: A survey,” IEEE Trans. Affect. Comput., vol. 13, no. 3, pp. 1195–1215, Jul.-Sep. 2022. doi: 10.1109/TAFFC.2020.2981446
[20] B. M. Waller and M. Smith Pasqualini, “Analysing facial expression using the Facial Action Coding System (FACS),” in Body – Language – Communication: An International Handbook on Multimodality in Human Interaction, C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill, and S. Tessendorf, Eds. Boston, USA: Mouton de Gruyter, 2013, pp. 125–127.
[21] S. Wang, L. Hao, and Q. Ji, “Facial action unit recognition and intensity estimation enhanced through label dependencies,” IEEE Trans. Image Process., vol. 28, no. 3, pp. 1428–1442, Mar. 2019. doi: 10.1109/TIP.2018.2878339
[22] J. M. Girard, J. F. Cohn, and F. De la Torre, “Estimating smile intensity: A better way,” Pattern Recognit. Lett., vol. 66, pp. 13–21, Nov. 2015. doi: 10.1016/j.patrec.2014.10.004
[23] Q. Wei, E. Bozkurt, L.-P. Morency, and B. Sun, “Spontaneous smile intensity estimation by fusing saliency maps and convolutional neural networks,” J. Electron. Imaging, vol. 28, no. 2, p. 023031, Apr. 2019.
[24] S. Wang, L. Hao, and Q. Ji, “Facial action unit recognition and intensity estimation enhanced through label dependencies,” IEEE Trans. Image Process., vol. 28, no. 3, pp. 1428–1442, Mar. 2019. doi: 10.1109/TIP.2018.2878339
[25] J. C. P. Batista, O. R. P. Bellon, and L. Silva, “Landmark-free smile intensity estimation,” in Proc. 29th Conf. Graphics, Patterns and Images - Workshop on Face Processing Applications: Biometrics and Beyond, São José do Rio Preto, Brazil, 2016.
[26] Q. Wei, “Saliency maps-based convolutional neural networks for facial expression recognition,” IEEE Access, vol. 9, pp. 76224–76234, May 2021. doi: 10.1109/ACCESS.2021.3082694
[27] S. Du, Y. Tao, and A. M. Martinez, “Compound facial expressions of emotion,” Proc. Natl. Acad. Sci. USA, vol. 111, no. 15, pp. E1454–E1462, Apr. 2014.
[28] S. Wang, B. Pan, S. Wu, and Q. Ji, “Deep facial action unit recognition and intensity estimation from partially labelled data,” IEEE Trans. Affect. Comput., vol. 12, no. 4, pp. 1018–1030, Oct.-Dec. 2021. doi: 10.1109/TAFFC.2019.2914654
[29] C. Quan, Y. Qian, and F. Ren, “Dynamic facial expression recognition based on K-order emotional intensity model,” in Proc. IEEE Int. Conf. Robotics and Biomimetics, Bali, Indonesia, 2014, pp. 1164–1168.
[30] K.-Y. Chang, C.-S. Chen, and Y.-P. Hung, “Intensity rank estimation of facial expressions based on a single image,” in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, Manchester, UK, 2013, pp. 3152–3157.
[31] K. Shimada, Y. Noguchi, and T. Kurita, “Fast and robust smile intensity estimation by cascaded support vector machines,” Int. J. Comput. Theory Eng., vol. 5, no. 1, pp. 24–30, Feb. 2013.
[32] H. Gunes and M. Piccardi, “Automatic temporal segment detection and affect recognition from face and body display,” IEEE Trans. Syst., Man, Cybern., Part B (Cybern.), vol. 39, no. 1, pp. 64–84, Feb. 2009. doi: 10.1109/TSMCB.2008.927269
[33] S. Koelstra, M. Pantic, and I. Patras, “A dynamic texture-based approach to recognition of facial actions and their temporal models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 11, pp. 1940–1954, Nov. 2010. doi: 10.1109/TPAMI.2010.50
[34] Y. Ren, J. Hu, and W. Deng, “Facial expression intensity estimation based on CNN features and RankBoost,” in Proc. 4th IAPR Asian Conf. Pattern Recognition, Nanjing, China, 2017, pp. 488–493.
[35] Y. Li, X. Huang, and G. Zhao, “Joint local and global information learning with single apex frame detection for micro-expression recognition,” IEEE Trans. Image Process., vol. 30, pp. 249–263, Nov. 2021. doi: 10.1109/TIP.2020.3035042
[36] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition - Workshops, San Francisco, USA, 2010, pp. 94–101.
[37] G. Zhao, X. Huang, M. Taini, S. Z. Li, and M. Pietikäinen, “Facial expression recognition from near-infrared videos,” Image Vision Comput., vol. 29, no. 9, pp. 607–619, Aug. 2011. doi: 10.1016/j.imavis.2011.07.002
[38] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale, “A high-resolution 3D dynamic facial expression database,” in Proc. 8th IEEE Int. Conf. Automatic Face & Gesture Recognition, Amsterdam, The Netherlands, 2008, pp. 1–6.
[39] M. S. Bartlett, G. Littlewort, B. Braathen, T. J. Sejnowski, and J. R. Movellan, “A prototype for automatic recognition of spontaneous facial actions,” in Proc. 16th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2002.
[40] J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett, and J. Movellan, “Toward practical smile detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 11, pp. 2106–2111, Nov. 2009. doi: 10.1109/TPAMI.2009.42
[41] A. Savran, B. Sankur, and M. Taha Bilge, “Regression-based intensity estimation of facial action units,” Image Vision Comput., vol. 30, no. 10, pp. 774–784, Oct. 2012. doi: 10.1016/j.imavis.2011.11.008
[42] S. Yang, D. Zhou, J. Cao, and Y. Guo, “Rethinking low-light enhancement via transformer-GAN,” IEEE Signal Process. Lett., vol. 29, pp. 1082–1086, Apr. 2022. doi: 10.1109/LSP.2022.3167331
[43] S. Yang, D. Zhou, J. Cao, and Y. Guo, “LightingNet: An integrated learning method for low-light image enhancement,” IEEE Trans. Comput. Imaging, vol. 9, pp. 29–42, Jan. 2023. doi: 10.1109/TCI.2023.3240087
[44] N. Zeng, P. Wu, Y. Zhang, H. Li, J. Mao, and Z. Wang, “DPMSN: A dual-pathway multiscale network for image forgery detection,” IEEE Trans. Ind. Inf., vol. 20, no. 5, pp. 7665–7674, May 2024. doi: 10.1109/TII.2024.3359454
[45] W. Li, Y. Guo, B. Wang, and B. Yang, “Learning spatiotemporal embedding with gated convolutional recurrent networks for translation initiation site prediction,” Pattern Recognit., vol. 136, p. 109234, Apr. 2023. doi: 10.1016/j.patcog.2022.109234
[46] Y. Guo, D. Zhou, P. Li, C. Li, and J. Cao, “Context-aware poly(A) signal prediction model via deep spatial-temporal neural networks,” IEEE Trans. Neural Netw. Learn. Syst., vol. 35, no. 6, pp. 8241–8253, Jun. 2024. doi: 10.1109/TNNLS.2022.3226301
[47] I. Ntinou, E. Sanchez, A. Bulat, M. Valstar, and G. Tzimiropoulos, “A transfer learning approach to heatmap regression for action unit intensity estimation,” IEEE Trans. Affect. Comput., vol. 14, no. 1, pp. 436–450, Jan.-Mar. 2023. doi: 10.1109/TAFFC.2021.3061605
[48] T. Song, Z. Cui, Y. Wang, W. Zheng, and Q. Ji, “Dynamic probabilistic graph convolution for facial action unit intensity estimation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Nashville, USA, 2021, pp. 4843–4852.
[49] M. Kim and V. Pavlovic, “Hidden conditional ordinal random fields for sequence classification,” in Proc. European Conf. Machine Learning and Knowledge Discovery in Databases, Barcelona, Spain, 2010, pp. 51–65.
[50] S. K. A. Kamarol, M. H. Jaward, H. Kälviäinen, J. Parkkinen, and R. Parthiban, “Joint facial expression recognition and intensity estimation based on weighted votes of image sequences,” Pattern Recognit. Lett., vol. 92, pp. 25–32, Jun. 2017. doi: 10.1016/j.patrec.2017.04.003
[51] F. Bi, T. He, Y. Xie, and X. Luo, “Two-stream graph convolutional network-incorporated latent feature analysis,” IEEE Trans. Serv. Comput., vol. 16, no. 4, pp. 3027–3042, Jul.-Aug. 2023. doi: 10.1109/TSC.2023.3241659
[52] D. Wu, X. Luo, Y. He, and M. Zhou, “A prediction-sampling-based multilayer-structured latent factor model for accurate representation to high-dimensional and sparse data,” IEEE Trans. Neural Netw. Learn. Syst., vol. 35, no. 3, pp. 3845–3858, Mar. 2024. doi: 10.1109/TNNLS.2022.3200009
[53] M. Sabri and T. Kurita, “Facial expression intensity estimation using Siamese and triplet networks,” Neurocomputing, vol. 313, pp. 143–154, Nov. 2018. doi: 10.1016/j.neucom.2018.06.054
[54] S. Wang, B. Pan, S. Wu, and Q. Ji, “Deep facial action unit recognition and intensity estimation from partially labelled data,” IEEE Trans. Affect. Comput., vol. 12, no. 4, pp. 1018–1030, Oct.-Dec. 2021. doi: 10.1109/TAFFC.2019.2914654
[55] P. Yang, Q. Liu, and D. N. Metaxas, “RankBoost with l1 regularization for facial expression recognition and intensity estimation,” in Proc. IEEE 12th Int. Conf. Computer Vision, Kyoto, Japan, 2009, pp. 1018–1025.
[56] C.-T. Liao, H.-J. Chuang, and S.-H. Lai, “Learning expression kernels for facial expression intensity estimation,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, Kyoto, Japan, 2012, pp. 2217–2220.
[57] T.-R. Huang, S.-M. Hsu, and L.-C. Fu, “Data augmentation via face morphing for recognizing intensities of facial emotions,” IEEE Trans. Affect. Comput., vol. 14, no. 2, pp. 1228–1235, Apr.-Jun. 2023. doi: 10.1109/TAFFC.2021.3096922
[58] C.-H. Chang, P.-C. Lin, T.-W. Kuan, J.-M. Chen, Y.-C. Lin, J.-F. Wang, and A.-C. Tsai, “Multi-level smile intensity measuring based on mouth-corner features for happiness detection,” in Proc. Int. Conf. Orange Technologies, Xi’an, China, 2014, pp. 181–184.
[59] P. Wu, H. Liu, C. Xu, Y. Gao, Z. Li, and X. Zhang, “How do you smile? Towards a comprehensive smile analysis system,” Neurocomputing, vol. 235, pp. 245–254, Apr. 2017. doi: 10.1016/j.neucom.2017.01.020
[60] O. S. Ekundayo and S. Viriri, “Facial expression recognition: A review of trends and techniques,” IEEE Access, vol. 9, pp. 136944–136973, Sep. 2021. doi: 10.1109/ACCESS.2021.3113464
[61] S. B. Wang, A. Quattoni, L.-P. Morency, D. Demirdjian, and T. Darrell, “Hidden conditional random fields for gesture recognition,” in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, New York, USA, 2006, pp. 1521–1527.
[62] J. Chen, Y. Yuan, and X. Luo, “SDGNN: Symmetry-preserving dual-stream graph neural networks,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 7, pp. 1717–1719, Jul. 2024. doi: 10.1109/JAS.2024.124410
[63] W. Zhang, J. Wang, and F. Lan, “Dynamic hand gesture recognition based on short-term sampling neural networks,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 1, pp. 110–120, Jan. 2021. doi: 10.1109/JAS.2020.1003465
[64] L. Hu, Y. Yang, Z. Tang, Y. He, and X. Luo, “FCAN-MOPSO: An improved fuzzy-based graph clustering algorithm for complex networks with multiobjective particle swarm optimization,” IEEE Trans. Fuzzy Syst., vol. 31, no. 10, pp. 3470–3484, Oct. 2023. doi: 10.1109/TFUZZ.2023.3259726
[65] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. 3rd Int. Conf. Learning Representations, San Diego, USA, 2015.
[66] S. A. Bargal, E. Barsoum, C. C. Ferrer, and C. Zhang, “Emotion recognition in the wild from videos using images,” in Proc. 18th Int. Conf. Multimodal Interaction, Tokyo, Japan, 2016, pp. 433–436.
[67] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Miami, USA, 2009, pp. 248–256.
[68] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proc. 32nd Int. Conf. Machine Learning, Lille, France, 2015, pp. 448–456.
[69] Y. Zhang, B. Xu, and T. Zhao, “Convolutional multi-head self-attention on memory for aspect sentiment classification,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 4, pp. 1038–1044, Jul. 2020. doi: 10.1109/JAS.2020.1003243
[70] P. Wu, H. Li, L. Hu, J. Ge, and N. Zeng, “A local-global attention fusion framework with tensor decomposition for medical diagnosis,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 6, pp. 1536–1538, Jun. 2024. doi: 10.1109/JAS.2023.124167
[71] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, Feb. 2020. doi: 10.1109/TPAMI.2018.2858826
[72] G. H. Bower, S. G. Gilligan, and K. P. Monteiro, “Selectivity of learning caused by affective states,” J. Exp. Psychol. Gen., vol. 110, no. 4, pp. 451–473, Dec. 1981. doi: 10.1037/0096-3445.110.4.451
[73] J. Whitehill, Z. Serpell, Y.-C. Lin, A. Foster, and J. R. Movellan, “The faces of engagement: Automatic recognition of student engagement from facial expressions,” IEEE Trans. Affect. Comput., vol. 5, no. 1, pp. 86–98, Jan.-Mar. 2014. doi: 10.1109/TAFFC.2014.2316163
[74] T. S. Ashwin and R. M. R. Guddeti, “Unobtrusive behavioral analysis of students in classroom environment using non-verbal cues,” IEEE Access, vol. 7, pp. 150693–150709, Oct. 2019. doi: 10.1109/ACCESS.2019.2947519
[75] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask cascaded convolutional networks,” IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, Oct. 2016. doi: 10.1109/LSP.2016.2603342
[76] R. Walecki, O. Rudovic, V. Pavlovic, and M. Pantic, “Variable-state latent conditional random fields for facial expression recognition and action unit detection,” in Proc. 11th IEEE Int. Conf. and Workshops on Automatic Face and Gesture Recognition, Ljubljana, Slovenia, 2015, pp. 1–8.
[77] D. Li, W. Qi, and S. Sun, “Facial landmarks and expression label guided photorealistic facial expression synthesis,” IEEE Access, vol. 9, pp. 56292–56300, Apr. 2021. doi: 10.1109/ACCESS.2021.3072057
[78] S. Chen, J. Wang, Y. Chen, Z. Shi, X. Geng, and Y. Rui, “Label distribution learning on auxiliary label space graphs for facial expression recognition,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, USA, 2020, pp. 13981–13990.
[79] M. K. Abd El Meguid and M. D. Levine, “Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers,” IEEE Trans. Affect. Comput., vol. 5, no. 2, pp. 141–154, Apr.-Jun. 2014. doi: 10.1109/TAFFC.2014.2317711
[80] A. Dapogny and K. Bailly, “Investigating deep neural forests for facial expression recognition,” in Proc. 13th IEEE Int. Conf. Automatic Face & Gesture Recognition, Xi’an, China, 2018, pp. 629–633.
[81] K. Zhu, Y. Wang, H. Yang, D. Huang, and L. Chen, “Intensity enhancement via GAN for multimodal facial expression recognition,” in Proc. IEEE Int. Conf. Image Processing, Abu Dhabi, United Arab Emirates, 2020, pp. 1346–1350.