A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 11, Issue 4, Apr. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
M. V. Luzón, N. Rodríguez-Barroso, A. Argente-Garrido, D. Jiménez-López, J. M. Moyano, J. Del Ser, W. Ding, and  F. Herrera,  “A tutorial on federated learning from theory to practice: Foundations, software frameworks, exemplary use cases, and selected trends,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 4, pp. 824–850, Apr. 2024. doi: 10.1109/JAS.2024.124215

A Tutorial on Federated Learning from Theory to Practice: Foundations, Software Frameworks, Exemplary Use Cases, and Selected Trends

doi: 10.1109/JAS.2024.124215
Funds: This work was partially supported by the Spanish R&D&I grants PID2020-119478GB-I00 and PID2020-115832GB-I00, funded by MCIN/AEI/10.13039/501100011033. N. Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future", Spain. J. Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR. J. Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI) through the AI4ES project, as well as from the Department of Education of the Basque Government (consolidated research group MATHMODE, IT1456-22)
More Information
  • When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, our tutorial provides exemplary case studies from three complementary perspectives: i) Foundations of FL, describing the main components of FL, from key elements to FL categories; ii) Implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) Trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a reference work for researchers, developers, and data scientists wishing to explore the capabilities of FL in practical applications.
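The training loop the abstract describes (clients fit models on private data; a server aggregates only the model updates) can be sketched with a minimal, framework-agnostic federated averaging example. This is an illustrative toy, not code from the paper: the linear model, client data, and hyperparameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 clients, each holding private data for y = X @ w.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on its own data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

# Federated averaging: the server never sees raw data, only model weights,
# which it combines weighted by each client's dataset size.
w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # should land near true_w = [2, -1]
```

Weighting the average by client dataset size is the choice made in the classic FedAvg formulation; other aggregation operators exist and are one of the design decisions the tutorial's frameworks expose.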

     



    Figures (34) / Tables (9)

    Article Metrics

    Article views: 1477, PDF downloads: 257

    Highlights

    • The aim is to provide a practical tutorial on FL for practitioners and researchers
    • It includes a short methodology and a systematic analysis of software frameworks
    • It provides case studies covering the foundations of FL and implementation guidelines
    • It also explores trends and lessons learned in FL
