Abstract:
Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate learning in a target domain. It is particularly useful when the target domain has very few or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., when leveraging source domain data/knowledge undesirably reduces learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches have been proposed in the literature to address this issue, but no systematic survey exists. This paper fills this gap by first introducing the definition of NT and its causes, and then reviewing over fifty representative approaches for overcoming NT, which fall into three categories: domain similarity estimation, safe transfer, and NT mitigation. Applications of NT mitigation strategies that facilitate positive transfer in many areas, including computer vision, bioinformatics, natural language processing, recommender systems, and robotics, are also reviewed. Finally, we give guidelines on NT task construction and baseline algorithms, benchmark existing TL and NT mitigation approaches on three NT-specific datasets, and point out challenges and future research directions. To ensure reproducibility, our code is publicly available at https://github.com/chamwen/NT-Benchmark.
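For intuition about the NT condition described in the abstract, the sketch below compares a target-only baseline against a naive transfer strategy (pooling source and target data) and flags NT when the transfer learner underperforms the baseline. This is a minimal illustration under stated assumptions: the synthetic shifted-domain data, the logistic-regression learner, and all variable names are illustrative, not the survey's benchmark protocol or released code.

```python
# Minimal sketch of the NT check implied by the definition above:
# transfer is "negative" when using source data hurts target performance.
# Data, learner, and names are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical domains: the source is shifted relative to the target,
# so its decision boundary (x0 > 1) differs from the target's (x0 > 0).
Xs = rng.normal(loc=1.0, size=(500, 2)); ys = (Xs[:, 0] > 1.0).astype(int)
Xt = rng.normal(loc=0.0, size=(60, 2));  yt = (Xt[:, 0] > 0.0).astype(int)
Xt_test = rng.normal(loc=0.0, size=(500, 2))
yt_test = (Xt_test[:, 0] > 0.0).astype(int)

# Target-only baseline: train on the few labeled target samples.
acc_baseline = accuracy_score(
    yt_test, LogisticRegression().fit(Xt, yt).predict(Xt_test))

# Naive transfer: pool source and target data (a simple TL strategy).
X_pool = np.vstack([Xs, Xt]); y_pool = np.concatenate([ys, yt])
acc_transfer = accuracy_score(
    yt_test, LogisticRegression().fit(X_pool, y_pool).predict(Xt_test))

# NT occurs when the transfer learner underperforms the baseline.
print(f"target-only: {acc_baseline:.3f}, pooled transfer: {acc_transfer:.3f}")
print("negative transfer" if acc_transfer < acc_baseline else "positive transfer")
```

The target-only learner plays the role of the reference performance in this comparison; in the NT literature, transfer is generally judged negative or positive relative to such a no-transfer baseline.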
W. Zhang, L. F. Deng, L. Zhang, and D. R. Wu, “A survey on negative transfer,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 2, pp. 305–329, Feb. 2023. doi: 10.1109/JAS.2022.106004.