Citation: Y. Du, Y. Ma, J. Huang, X. Mei, J. Qin, and F. Fan, “Joint super-resolution and nonuniformity correction model for infrared light field images based on frequency correlation learning,” IEEE/CAA J. Autom. Sinica, 2024. doi: 10.1109/JAS.2024.124881

[1] Y. Ma, X. Wang, W. Gao, Y. Du, J. Huang, and F. Fan, “Progressive fusion network based on infrared light field equipment for infrared image enhancement,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 9, pp. 1687–1690, 2022. doi: 10.1109/JAS.2022.105812
[2] G. Liu, H. Yue, J. Wu, and J. Yang, “Intra-inter view interaction network for light field image super-resolution,” IEEE Trans. Multimedia, vol. 25, pp. 256–266, 2021.
[3] Y. Wang, L. Wang, J. Yang, W. An, J. Yu, and Y. Guo, “Spatial-angular interaction for light field image super-resolution,” in European Conf. on Computer Vision. Springer, 2020, pp. 290–308.
[4] Y. Ding, Z. Chen, Y. Ji, J. Yu, and J. Ye, “Light field-based underwater 3D reconstruction via angular re-sampling,” IEEE Trans. Computational Imaging, vol. 9, pp. 881–893, 2023. doi: 10.1109/TCI.2023.3319983
[5] Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Structured-light-field 3D imaging without phase unwrapping,” Optics and Lasers in Engineering, vol. 129, p. 106047, 2020. doi: 10.1016/j.optlaseng.2020.106047
[6] J. Hur, J. Y. Lee, J. Choi, and J. Kim, “I see-through you: A framework for removing foreground occlusion in both sparse and dense light field images,” in Proc. of the IEEE/CVF Winter Conf. on Applications of Computer Vision, 2023, pp. 229–238.
[7] X. Wang, J. Liu, S. Chen, and G. Wei, “Effective light field de-occlusion network based on Swin Transformer,” IEEE Trans. Circuits and Systems for Video Technology, vol. 33, no. 6, pp. 2590–2599, 2023. doi: 10.1109/TCSVT.2022.3226227
[8] W. Yan, X. Zhang, and H. Chen, “Occlusion-aware unsupervised light field depth estimation based on multi-scale GANs,” IEEE Trans. Circuits and Systems for Video Technology, pp. 1–1, 2024.
[9] Z. Cui, H. Sheng, D. Yang, S. Wang, R. Chen, and W. Ke, “Light field depth estimation for non-Lambertian objects via adaptive cross operator,” IEEE Trans. Circuits and Systems for Video Technology, vol. 34, no. 2, pp. 1199–1211, 2024. doi: 10.1109/TCSVT.2023.3292884
[10] A. Glowacz, “Ventilation diagnosis of minigrinders using thermal images,” Expert Systems with Applications, vol. 237, p. 121435, 2024. doi: 10.1016/j.eswa.2023.121435
[11] A. Glowacz, “Thermographic fault diagnosis of electrical faults of commutator and induction motors,” Engineering Applications of Artificial Intelligence, vol. 121, p. 105962, 2023. doi: 10.1016/j.engappai.2023.105962
[12] K. Prajapati, V. Chudasama, H. Patel, A. Sarvaiya, K. P. Upla, K. Raja, R. Ramachandra, and C. Busch, “Channel split convolutional neural network (ChasNet) for thermal image super-resolution,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021, pp. 4368–4377.
[13] Y. Huang, Z. Jiang, R. Lan, S. Zhang, and K. Pi, “Infrared image super-resolution via transfer learning and PSRGAN,” IEEE Signal Processing Letters, vol. 28, pp. 982–986, 2021. doi: 10.1109/LSP.2021.3077801
[14] L. Sun, Z. Liu, X. Sun, L. Liu, R. Lan, and X. Luo, “Lightweight image super-resolution via weighted multi-scale residual network,” IEEE/CAA Journal of Automatica Sinica, vol. 8, no. 7, pp. 1271–1280, 2021. doi: 10.1109/JAS.2021.1004009
[15] Y. Huang, T. Miyazaki, X. Liu, and S. Omachi, “Infrared image super-resolution: Systematic review and future trends,” arXiv preprint arXiv:2212.12322, 2022.
[16] Y. Wang, L. Wang, G. Wu, J. Yang, W. An, J. Yu, and Y. Guo, “Disentangling light fields for super-resolution and disparity estimation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, no. 1, pp. 425–443, 2022.
[17] Z. Liang, Y. Wang, L. Wang, J. Yang, S. Zhou, and Y. Guo, “Learning non-local spatial-angular correlation for light field image super-resolution,” in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2023, pp. 12376–12386.
[18] R. A. Farrugia, C. Galea, and C. Guillemot, “Super resolution of light field images using linear subspace projection of patch-volumes,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 7, pp. 1058–1071, 2017. doi: 10.1109/JSTSP.2017.2747127
[19] M. Rossi and P. Frossard, “Geometry-consistent light field super-resolution via graph-based regularization,” IEEE Trans. Image Processing, vol. 27, no. 9, pp. 4207–4218, 2018. doi: 10.1109/TIP.2018.2828983
[20] V. K. Ghassab and N. Bouguila, “Light field super-resolution using edge-preserved graph-based regularization,” IEEE Trans. Multimedia, vol. 22, no. 6, pp. 1447–1457, 2019.
[21] K. Jin, A. Yang, Z. Wei, S. Guo, M. Gao, and X. Zhou, “DistgEPIT: Enhanced disparity learning for light field image super-resolution,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2023, pp. 1373–1383.
[22] R. Cong, H. Sheng, D. Yang, Z. Cui, and R. Chen, “Exploiting spatial and angular correlations with deep efficient transformers for light field image super-resolution,” IEEE Trans. Multimedia, vol. 26, pp. 1421–1435, 2024. doi: 10.1109/TMM.2023.3282465
[23] S. Wang, T. Zhou, Y. Lu, and H. Di, “Detail-preserving transformer for light field image super-resolution,” in Proc. of the AAAI Conf. on Artificial Intelligence, vol. 36, no. 3, 2022, pp. 2522–2530.
[24] H. Sheng, S. Wang, D. Yang, R. Cong, Z. Cui, and R. Chen, “Cross-view recurrence-based self-supervised super-resolution of light field,” IEEE Trans. Circuits and Systems for Video Technology, vol. 33, no. 12, pp. 7252–7266, 2023. doi: 10.1109/TCSVT.2023.3278462
[25] V. Van Duong, T. N. Huu, J. Yim, and B. Jeon, “Light field image super-resolution network via joint spatial-angular and epipolar information,” IEEE Trans. Computational Imaging, vol. 9, pp. 350–366, 2023. doi: 10.1109/TCI.2023.3261501
[26] H. W. F. Yeung, J. Hou, X. Chen, J. Chen, Z. Chen, and Y. Y. Chung, “Light field spatial super-resolution using deep efficient spatial-angular separable convolution,” IEEE Trans. Image Processing, vol. 28, no. 5, pp. 2319–2330, 2018.
[27] N. Meng, H. K.-H. So, X. Sun, and E. Y. Lam, “High-dimensional dense residual convolutional neural network for light field reconstruction,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 43, no. 3, pp. 873–886, 2019.
[28] N. Meng, X. Wu, J. Liu, and E. Lam, “High-order residual network for light field super-resolution,” in Proc. of the AAAI Conf. on Artificial Intelligence, vol. 34, no. 7, 2020, pp. 11757–11764.
[29] S. Zhang, Y. Lin, and H. Sheng, “Residual networks for light field image super-resolution,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2019, pp. 11046–11055.
[30] J. Jin, J. Hou, J. Chen, and S. Kwong, “Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2020, pp. 2260–2269.
[31] Y. Wang, Z. Liang, L. Wang, J. Yang, W. An, and Y. Guo, “Real-world light field image super-resolution via degradation modulation,” IEEE Trans. Neural Networks and Learning Systems, pp. 1–15, 2024.
[32] Y. Wang, J. Yang, L. Wang, X. Ying, T. Wu, W. An, and Y. Guo, “Light field image super-resolution using deformable convolution,” IEEE Trans. Image Processing, vol. 30, pp. 1057–1071, 2020.
[33] X. Wang, J. Ma, P. Yi, X. Tian, J. Jiang, and X.-P. Zhang, “Learning an epipolar shift compensation for light field image super-resolution,” Information Fusion, vol. 79, pp. 188–199, 2022. doi: 10.1016/j.inffus.2021.10.005
[34] Y. He, C. Zhang, B. Zhang, and Z. Chen, “FSPnP: Plug-and-play frequency-spatial-domain hybrid denoiser for thermal infrared image,” IEEE Trans. Geoscience and Remote Sensing, vol. 62, pp. 1–16, 2024.
[35] R. He, M. Guan, and C. Wen, “SCENS: Simultaneous contrast enhancement and noise suppression for low-light images,” IEEE Trans. Industrial Electronics, vol. 68, no. 9, pp. 8687–8697, 2020.
[36] O. Rukundo and H. Cao, “Nearest neighbor value interpolation,” Int. Journal of Advanced Computer Science and Applications, vol. 3, no. 4, pp. 25–30, 2012.
[37] T. Li, X. Dong, and H. Chen, “Single image super-resolution incorporating example-based gradient profile estimation and weighted adaptive p-norm,” Neurocomputing, vol. 355, pp. 105–120, 2019. doi: 10.1016/j.neucom.2019.04.051
[38] Y. Zou, L. Zhang, C. Liu, B. Wang, Y. Hu, and Q. Chen, “Super-resolution reconstruction of infrared images based on a convolutional neural network with skip connections,” Optics and Lasers in Engineering, vol. 146, p. 106717, 2021. doi: 10.1016/j.optlaseng.2021.106717
[39] Y. Choi, N. Kim, S. Hwang, and I. S. Kweon, “Thermal image enhancement using convolutional neural network,” in 2016 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 223–230.
[40] R. E. Rivadeneira, P. L. Suárez, A. D. Sappa, and B. X. Vintimilla, “Thermal image superresolution through deep convolutional neural network,” in Int. Conf. on Image Analysis and Recognition. Springer, 2019, pp. 417–426.
[41] Y. Cao, L. Li, B. Liu, W. Zhou, Z. Li, and W. Ni, “CFMB-T: A cross-frequency multi-branch transformer for low-quality infrared remote sensing image super-resolution,” Infrared Physics & Technology, vol. 133, p. 104861, 2023.
[42] H. Yu, F.-S. Chen, Z.-J. Zhang, and C.-S. Wang, “Single infrared image super-resolution combining non-local means with kernel regression,” Infrared Physics & Technology, vol. 61, pp. 50–59, 2013.
[43] V. Chudasama, H. Patel, K. Prajapati, K. P. Upla, R. Ramachandra, K. Raja, and C. Busch, “TherISuRNet - a computationally efficient thermal image super-resolution network,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, 2020, pp. 86–87.
[44] K. Bhalla, D. Koundal, S. Bhatia, M. K. Imam Rahmani, and M. Tahir, “Fusion of infrared and visible images using fuzzy based Siamese convolutional network,” Computers, Materials & Continua, vol. 70, no. 3, 2022.
[45] H. Jiang and Z. Chen, “Flexible window-based self-attention transformer in thermal image super-resolution,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2024, pp. 3076–3085.
[46] Y. Wang, F. Liu, K. Zhang, G. Hou, Z. Sun, and T. Tan, “LFNet: A novel bidirectional recurrent convolutional neural network for light-field image super-resolution,” IEEE Trans. Image Processing, vol. 27, no. 9, pp. 4274–4286, 2018. doi: 10.1109/TIP.2018.2834819
[47] Y. Yoon, H.-G. Jeon, D. Yoo, J.-Y. Lee, and I. S. Kweon, “Learning a deep convolutional network for light-field image super-resolution,” in Proc. of the IEEE Int. Conf. on Computer Vision Workshops, 2015, pp. 24–32.
[48] G. Liu, H. Yue, K. Li, and J. Yang, “Adaptive pixel aggregation for joint spatial and angular super-resolution of light field images,” Information Fusion, vol. 104, p. 102183, 2024. doi: 10.1016/j.inffus.2023.102183
[49] Z. Liang, Y. Wang, L. Wang, J. Yang, and S. Zhou, “Light field image super-resolution with transformers,” IEEE Signal Processing Letters, vol. 29, pp. 563–567, 2022. doi: 10.1109/LSP.2022.3146798
[50] A. Kar, S. Nehra, J. Mukherjee, and P. K. Biswas, “Adapting the learning models of single image super-resolution into light-field imaging,” IEEE Trans. Computational Imaging, vol. 10, pp. 496–509, 2024. doi: 10.1109/TCI.2024.3380348
[51] S. Zhang and E. Y. Lam, “Light field image restoration via latent diffusion and multi-view attention,” IEEE Signal Processing Letters, vol. 31, pp. 1094–1098, 2024. doi: 10.1109/LSP.2024.3383798
[52] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[53] J. Gao, Z. Shi, G. Wang, J. Li, Y. Yuan, S. Ge, and X. Zhou, “Accurate temporal action proposal generation with relation-aware pyramid network,” in Proc. of the AAAI Conf. on Artificial Intelligence, vol. 34, no. 7, 2020, pp. 10810–10817.
[54] M.-G. Gan and Y. Zhang, “Content temporal relation network for temporal action proposal generation,” Pattern Recognition, vol. 149, p. 110245, 2024. doi: 10.1016/j.patcog.2023.110245
[55] H. Wu, Z. Zhao, and Z. Wang, “Meta-UNet: Multi-scale efficient transformer attention UNet for fast and high-accuracy polyp segmentation,” IEEE Trans. Automation Science and Engineering, pp. 1–12, 2023.
[56] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, “SwinIR: Image restoration using Swin Transformer,” in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2021, pp. 1833–1844.
[57] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[58] G. Wu, Y. Wang, Y. Liu, L. Fang, and T. Chai, “Spatial-angular attention network for light field reconstruction,” IEEE Trans. Image Processing, vol. 30, pp. 8999–9013, 2021. doi: 10.1109/TIP.2021.3122089
[59] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2016, pp. 1874–1883.
[60] X. Wang, J. Ma, and J. Jiang, “Contrastive learning for blind super-resolution via a distortion-specific network,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 1, pp. 78–89, 2022.
[61] G. Jocher, A. Chaurasia, and J. Qiu, “Ultralytics YOLO,” Jan. 2023. [Online]. Available: https://github.com/ultralytics/ultralytics
[62] B. Yuan, Y. Jiang, K. Fu, and Q. Zhao, “Parallax-aware network for light field salient object detection,” IEEE Signal Processing Letters, vol. 31, pp. 810–814, 2024. doi: 10.1109/LSP.2024.3374079