A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Y. Li, Y. Wu, G. Cheng, C. Tao, B. Dang, Y. Wang, J. Zhang, C. Zhang, Y. Liu, X. Tang, J. Ma, and Y. Zhang, “MEET: A million-scale dataset for fine-grained geospatial scene classification with zoom-free remote sensing imagery,” IEEE/CAA J. Autom. Sinica, 2025. doi: 10.1109/JAS.2025.125324

MEET: A Million-Scale Dataset for Fine-Grained Geospatial Scene Classification With Zoom-Free Remote Sensing Imagery

doi: 10.1109/JAS.2025.125324
Funds: This work was supported by the National Natural Science Foundation of China (42030102, 42371321).
Abstract
Accurate fine-grained geospatial scene classification using remote sensing imagery is essential for a wide range of applications. However, existing approaches often rely on manually zooming remote sensing images at different scales to create typical scene samples, which fails to adequately support the fixed-resolution image interpretation required in real-world scenarios. To address this limitation, we introduce the Million-scale finE-grained geospatial scEne classification dataseT (MEET), which contains over 1.03 million zoom-free remote sensing scene samples manually annotated into 80 fine-grained categories. In MEET, each scene sample follows a scene-in-scene layout, where the central scene serves as the reference and auxiliary scenes provide crucial spatial context for fine-grained classification. Moreover, to tackle the emerging challenge of scene-in-scene classification, we present the Context-Aware Transformer (CAT), a model designed for this task that adaptively fuses spatial context by learning attentional features capturing the relationships between the center and auxiliary scenes, thereby classifying the scene samples accurately. Based on MEET, we establish a comprehensive benchmark for fine-grained geospatial scene classification, evaluating CAT against 11 competitive baselines. The results demonstrate that CAT significantly outperforms these baselines, achieving a 1.88% higher balanced accuracy (BA) with the Swin-Large backbone and a notable 7.87% improvement with the Swin-Huge backbone. Further experiments validate the effectiveness of each module in CAT and show its practical applicability in urban functional zone mapping. The source code and dataset will be publicly available at https://jerrywyn.github.io/project/MEET.html.
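
To make the scene-in-scene idea more concrete, the minimal sketch below shows one way a context-aware fusion head could combine a center-scene embedding with auxiliary-scene embeddings through cross-attention, in the spirit of CAT. It is an illustrative assumption by the editor, not the authors' released implementation: the module name ContextFusionHead, the feature dimension of 1024, the eight auxiliary scenes, and the single attention layer are all hypothetical choices.

    # Hypothetical sketch (not the authors' CAT code): a center-scene feature
    # attends over auxiliary-scene features so that spatial context informs
    # the fine-grained class prediction.
    import torch
    import torch.nn as nn

    class ContextFusionHead(nn.Module):
        def __init__(self, feat_dim: int = 1024, num_heads: int = 8, num_classes: int = 80):
            super().__init__()
            # Cross-attention: query = center scene, keys/values = auxiliary scenes.
            self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(feat_dim)
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, center_feat, aux_feats):
            # center_feat: (B, D) embedding of the reference (center) scene
            # aux_feats:   (B, N, D) embeddings of the N auxiliary context scenes
            query = center_feat.unsqueeze(1)                    # (B, 1, D)
            context, _ = self.cross_attn(query, aux_feats, aux_feats)
            fused = self.norm(query + context).squeeze(1)       # residual fusion -> (B, D)
            return self.classifier(fused)                       # (B, num_classes) logits

    # Dummy usage with pooled backbone features (e.g., from a Swin encoder):
    head = ContextFusionHead()
    center = torch.randn(4, 1024)         # 4 center-scene embeddings
    auxiliary = torch.randn(4, 8, 1024)   # 8 surrounding context scenes each
    print(head(center, auxiliary).shape)  # torch.Size([4, 80])

The 80 output classes mirror the 80 fine-grained categories in MEET; everything else in the sketch stands in for whatever backbone and fusion design the paper actually uses.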

     

