Citation: W. Zhou, X. Zhu, Q.-L. Han, L. Li, X. Chen, S. Wen, and Y. Xiang, “The security of using large language models: A survey with emphasis on ChatGPT,” IEEE/CAA J. Autom. Sinica, 2024.
[1] Q. Miao, W. Zheng, Y. Lv, M. Huang, W. Ding, and F.-Y. Wang, “Dao to hanoi via desci: Ai paradigm shifts from alphago to chatgpt,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 4, pp. 877–897, 2023. doi: 10.1109/JAS.2023.123561
[2] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. doi: 10.1038/nature14539
[3] T. Wu, S. He, J. Liu, S. Sun, K. Liu, Q.-L. Han, and Y. Tang, “A brief overview of chatgpt: The history, status quo and potential future development,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 5, pp. 1122–1136, 2023. doi: 10.1109/JAS.2023.123618
[4] OpenAI, “Introducing chatgpt.” [Online]. Available: https://openai.com/blog/chatgpt
[5] W. D. Heaven, “Openai’s new language generator gpt-3 is shockingly good—and completely mindless,” July 2020. [Online]. Available: https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/
[6] S. Biswas, “The function of chat gpt in social media: According to chat gpt,” SSRN, 2023.
[7] H. Hassani and E. S. Silva, “The role of chatgpt in data science: how ai-assisted conversational interfaces are revolutionizing the field,” Big Data and Cognitive Computing, vol. 7, no. 2, p. 62, 2023. doi: 10.3390/bdcc7020062
[8] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language models are few-shot learners,” in Proceedings of the Advances in Neural Information Processing Systems, vol. 33. Curran Associates, Inc., 2020, pp. 1877–1901.
[9] F.-Y. Wang, Q. Miao, X. Li, X. Wang, and Y. Lin, “What does chatgpt say: The dao from algorithmic intelligence to linguistic intelligence,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 3, pp. 575–579, 2023. doi: 10.1109/JAS.2023.123486
[10] C. Guo, Y. Lu, Y. Dou, and F.-Y. Wang, “Can chatgpt boost artistic creation: The need of imaginative intelligence for parallel art,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 4, pp. 835–838, 2023. doi: 10.1109/JAS.2023.123555
[11] D. Baidoo-Anu and L. Owusu Ansah, “Education in the era of generative artificial intelligence (ai): Understanding the potential benefits of chatgpt in promoting teaching and learning,” SSRN, 2023.
[12] B. D. Lund and T. Wang, “Chatting about chatgpt: how may ai and gpt impact academia and libraries?,” Library Hi Tech News, vol. 40, no. 3, pp. 26–29, 2023. doi: 10.1108/LHTN-01-2023-0009
[13] S. Sok and K. Heng, “Chatgpt for education and research: A review of benefits and risks,” SSRN, 2023.
[14] D. Kalla and N. Smith, “Study and analysis of chat gpt and its impact on different fields of study,” International Journal of Innovative Science and Research Technology, vol. 8, no. 3, 2023.
[15] X. Xue, X. Yu, and F.-Y. Wang, “Chatgpt chats on computational experiments: From interactive intelligence to imaginative intelligence for design of artificial societies and optimization of foundational models,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 6, pp. 1357–1360, 2023. doi: 10.1109/JAS.2023.123585
[16] F.-Y. Wang, J. Yang, X. Wang, J. Li, and Q.-L. Han, “Chat with chatgpt on industry 5.0: Learning and decision-making for intelligent industries,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 4, pp. 831–834, 2023. doi: 10.1109/JAS.2023.123552
[17] D. M. Korngiebel and S. D. Mooney, “Considering the possibilities and pitfalls of generative pre-trained transformer 3 (gpt-3) in healthcare delivery,” npj Digital Medicine, vol. 4, no. 1, p. 93, 2021. doi: 10.1038/s41746-021-00464-x
[18] M. Sallam, “Chatgpt utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns,” Healthcare, vol. 11, no. 6, p. 887, 2023. doi: 10.3390/healthcare11060887
[19] S. S. Biswas, “Role of chat gpt in public health,” Annals of Biomedical Engineering, vol. 51, no. 5, pp. 868–869, 2023. doi: 10.1007/s10439-023-03172-7
[20] M. Sallam, N. Salim, M. Barakat, and A. Al-Tammemi, “Chatgpt applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations,” Narra J, vol. 3, no. 1, p. e103, 2023. doi: 10.52225/narra.v3i1.103
[21] M. Sallam, “The utility of chatgpt as an example of large language models in healthcare education, research and practice: Systematic review on the future perspectives and potential limitations,” medRxiv, pp. 1–2, 2023.
[22] M. Dowling and B. Lucey, “Chatgpt for (finance) research: The bananarama conjecture,” Finance Research Letters, vol. 53, p. 103662, 2023. doi: 10.1016/j.frl.2023.103662
[23] P. Rivas and L. Zhao, “Marketing with chatgpt: Navigating the ethical terrain of gpt-based chatbot technology,” AI, vol. 4, no. 2, pp. 375–384, 2023. doi: 10.3390/ai4020019
[24] G. F. Frederico, “Chatgpt in supply chains: Initial evidence of applications and potential research agenda,” Logistics, vol. 7, no. 2, p. 26, 2023. doi: 10.3390/logistics7020026
[25] I. Carvalho and S. Ivanov, “Chatgpt for tourism: applications, benefits and risks,” Tourism Review, 2023.
[26] B. Ahmad, S. Thakur, B. Tan, R. Karri, and H. Pearce, “Fixing hardware security bugs with large language models,” arXiv preprint arXiv: 2302.01215, 2023.
[27] C. S. Xia and L. Zhang, “Conversational automated program repair,” arXiv preprint arXiv: 2301.13246, 2023.
[28] D. Sobania, M. Briesch, C. Hanna, and J. Petke, “An analysis of the automatic bug fixing performance of chatgpt,” in Proceedings of the 45th International Conference on Software Engineering (ICSE ’23), 2023, pp. 1–8.
[29] M. Nair, R. Sadhukhan, and D. Mukhopadhyay, “Generating secure hardware using chatgpt resistant to cwes,” Cryptology ePrint Archive, Paper 2023/212, 2023.
[30] N. M. S. Surameery and M. Y. Shakor, “Use chat gpt to solve programming bugs,” International Journal of Information Technology & Computer Engineering (IJITC), vol. 3, no. 01, pp. 17–22, 2023.
[31] T. Team, “What happened in the chatgpt data breach?” [Online]. Available: https://www.twingate.com/blog/tips/chatgpt-data-breach
[32] A. Mudaliar, “Samsung bans chatgpt for staff, microsoft hints potential alternative.” [Online]. Available: https://www.spiceworks.com/tech/artificial-intelligence/news/samsung-bans-chatgpt-for-staff/
[33] B. Guembe, A. Azeta, S. Misra, V. C. Osamor, L. Fernandez-Sanz, and V. Pospelova, “The emerging threat of ai-driven cyber attacks: A review,” Applied Artificial Intelligence, vol. 36, no. 1, p. 2037254, 2022. doi: 10.1080/08839514.2022.2037254
[34] Y. Himeur, S. S. Sohail, F. Bensaali, A. Amira, and M. Alazab, “Latest trends of security and privacy in recommender systems: a comprehensive review and future perspectives,” Computers & Security, vol. 118, p. 102746, 2022.
[35] A. Mascellino, “New research exposes security risks in chatgpt plugins.” [Online]. Available: https://www.infosecurity-magazine.com/news/security-risks-chatgpt-plugins
[36] S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach, “Language (technology) is power: A critical survey of “bias” in NLP,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Jul. 2020, pp. 5454–5476.
[37] E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng, “Societal biases in language generation: Progress and challenges,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Aug. 2021, pp. 4275–4293.
[38] M. Hasal, J. Nowaková, K. Ahmed Saghair, H. Abdulla, V. Snášel, and L. Ogiela, “Chatbots: Security, privacy, data protection, and social aspects,” Concurrency and Computation: Practice and Experience, vol. 33, no. 19, p. e6426, 2021. doi: 10.1002/cpe.6426
[39] J. Pu, Z. Sarwar, S. M. Abdullah, A. Rehman, Y. Kim, P. Bhattacharya, M. Javed, and B. Viswanath, “Deepfake text detection: Limitations and opportunities,” in Proceedings of the IEEE Symposium on Security and Privacy 2023. IEEE Computer Society, 2023, pp. 19–36.
[40] L. Weidinger, J. Uesato, M. Rauh, C. Griffin, P.-S. Huang, J. Mellor, A. Glaese, M. Cheng, B. Balle, A. Kasirzadeh, C. Biles, S. Brown, Z. Kenton, W. Hawkins, T. Stepleton, A. Birhane, L. A. Hendricks, L. Rimell, W. Isaac, J. Haas, S. Legassick, G. Irving, and I. Gabriel, “Taxonomy of risks posed by language models,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). Association for Computing Machinery, 2022, pp. 214–229.
[41] E. Kasneci, K. Sessler, S. Küchemann, M. Bannert, D. Dementieva, F. Fischer, U. Gasser, G. Groh, S. Günnemann, E. Hüllermeier, S. Krusche, G. Kutyniok, T. Michaeli, C. Nerdel, J. Pfeffer, O. Poquet, M. Sailer, A. Schmidt, T. Seidel, M. Stadler, J. Weller, J. Kuhn, and G. Kasneci, “Chatgpt for good? on opportunities and challenges of large language models for education,” Learning and Individual Differences, vol. 103, p. 102274, 2023. doi: 10.1016/j.lindif.2023.102274
[42] T. Y. Zhuo, Y. Huang, C. Chen, and Z. Xing, “Red teaming chatgpt via jailbreaking: Bias, robustness, reliability and toxicity,” arXiv preprint arXiv: 2301.12867, 2023.
[43] P. P. Ray, “Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope,” Internet of Things and Cyber-Physical Systems, vol. 3, pp. 121–154, 2023. doi: 10.1016/j.iotcps.2023.04.003
[44] S. Kumar, V. Balachandran, L. Njoo, A. Anastasopoulos, and Y. Tsvetkov, “Language generation models can cause harm: So what can we do about it? an actionable survey,” in Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023, pp. 3299–3321.
[45] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J.-Y. Nie, and J.-R. Wen, “A survey of large language models,” arXiv preprint arXiv: 2303.18223, 2023.
[46] J. Deng, H. Sun, Z. Zhang, J. Cheng, and M. Huang, “Recent advances towards safe, responsible, and moral dialogue systems: A survey,” arXiv preprint arXiv: 2302.09270, 2023.
[47] C. Dilmegani, “Large language model training in 2023.” [Online]. Available: https://research.aimultiple.com/large-language-model-training/
[48] OpenAI, “Developing safe & responsible ai.” [Online]. Available: https://openai.com/safety
[49] C. Yeo and A. Chen, “Defining and evaluating fair natural language generation,” in Proceedings of the Fourth Widening Natural Language Processing Workshop. Seattle, USA: Association for Computational Linguistics, Jul. 2020, pp. 107–109.
[50] L. Lucy and D. Bamman, “Gender and representation bias in GPT-3 generated stories,” in Proceedings of the Third Workshop on Narrative Understanding. Virtual: Association for Computational Linguistics, Jun. 2021, pp. 48–55.
[51] J. Shihadeh, M. Ackerman, A. Troske, N. Lawson, and E. Gonzalez, “Brilliance bias in gpt-3,” in Proceedings of the IEEE Global Humanitarian Technology Conference 2022, 2022, pp. 62–69.
[52] D. M. Kaplan, R. Palitsky, S. J. Arconada Alvarez, N. S. Pozzo, M. N. Greenleaf, C. A. Atkinson, and W. A. Lam, “What’s in a name? experimental evidence of gender bias in recommendation letters generated by chatgpt,” Journal of Medical Internet Research, vol. 26, p. e51837, 2024. doi: 10.2196/51837
[53] W. Guo and A. Caliskan, “Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases,” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21). New York, NY, USA: Association for Computing Machinery, 2021, pp. 122–133.
[54] M. Nadeem, A. Bethke, and S. Reddy, “StereoSet: Measuring stereotypical bias in pretrained language models,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Online: Association for Computational Linguistics, Aug. 2021, pp. 5356–5371.
[55] C. Logé, E. Ross, D. Y. A. Dadey, S. Jain, A. Saporta, A. Y. Ng, and P. Rajpurkar, “Q-pain: A question answering dataset to measure social bias in pain management,” in Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021, pp. 1–12.
[56] K. S. Amin, H. P. Forman, and M. A. Davis, “Even with chatgpt, race matters,” Clinical Imaging, vol. 109, pp. 110–113, 2024.
[57] E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng, “The woman worked as a babysitter: On biases in language generation,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019, pp. 3407–3412.
[58] A. Zheng, “Dissecting bias of chatgpt in college major recommendations,” Information Technology and Management, pp. 1–12, 2024.
[59] L. Lippens, “Computer says ‘no’: Exploring systemic bias in chatgpt using an audit approach,” Computers in Human Behavior: Artificial Humans, vol. 2, no. 1, p. 100054, 2024. doi: 10.1016/j.chbah.2024.100054
[60] A. Abid, M. Farooqi, and J. Zou, “Persistent anti-muslim bias in large language models,” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21). New York, NY, USA: Association for Computing Machinery, 2021, pp. 298–306.
[61] ——, “Large language models associate muslims with violence,” Nature Machine Intelligence, vol. 3, no. 6, pp. 461–463, 2021. doi: 10.1038/s42256-021-00359-2
[62] A. A. Amin and K. S. Kabir, “A disability lens towards biases in gpt-3 generated open-ended languages,” arXiv preprint arXiv: 2206.11993, 2022.
[63] L. Gover, “Political bias in large language models,” The Commons: Puget Sound Journal of Politics, vol. 4, no. 1, p. 2, 2023.
[64] P. Pit, X. Ma, M. Conway, Q. Chen, J. Bailey, H. Pit, P. Keo, W. Diep, and Y.-G. Jiang, “Whose side are you on? investigating the political stance of large language models,” arXiv preprint arXiv: 2403.13840, 2024.
[65] Y. Bang, D. Chen, N. Lee, and P. Fung, “Measuring political bias in large language models: What is said and how it is said,” arXiv preprint arXiv: 2403.18932, 2024.
[66] J. Hartmann, J. Schwenzow, and M. Witte, “The political ideology of conversational ai: Converging evidence on chatgpt’s pro-environmental, left-libertarian orientation,” arXiv preprint arXiv: 2301.01768, 2023.
[67] D. Rozado, “The political biases of chatgpt,” Social Sciences, vol. 12, no. 3, pp. 1–8, 2023.
[68] R. W. McGee, “Is chat gpt biased against conservatives? an empirical study,” Tech. Rep., 2023.
[69] F. Motoki, V. Pinho Neto, and V. Rodrigues, “More human than human: measuring chatgpt political bias,” Public Choice, vol. 198, no. 1, pp. 3–23, 2024.
[70] N. Retzlaff, “Political biases of chatgpt in different languages,” 2024.
[71] J. Rutinowski, S. Franke, J. Endendyk, I. Dormuth, M. Roidl, and M. Pauly, “The self-perception and political biases of chatgpt,” Human Behavior and Emerging Technologies, vol. 2024, no. 1, p. 7115633, 2024.
[72] S. Gehman, S. Gururangan, M. Sap, Y. Choi, and N. A. Smith, “RealToxicityPrompts: Evaluating neural toxic degeneration in language models,” in Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, Nov. 2020, pp. 3356–3369.
[73] N. Ousidhoum, X. Zhao, T. Fang, Y. Song, and D.-Y. Yeung, “Probing toxic content in large pre-trained language models,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Online: Association for Computational Linguistics, Aug. 2021, pp. 4262–4274.
[74] D. Nozza, F. Bianchi, and D. Hovy, “HONEST: Measuring hurtful sentence completion in language models,” in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Jun. 2021, pp. 2398–2406.
[75] P. Schramowski, C. Turan, N. Andersen, C. A. Rothkopf, and K. Kersting, “Large pre-trained language models contain human-like biases of what is right and wrong to do,” Nature Machine Intelligence, vol. 4, no. 3, pp. 258–268, 2022. doi: 10.1038/s42256-022-00458-8
[76] T. Hartvigsen, S. Gabriel, H. Palangi, M. Sap, D. Ray, and E. Kamar, “ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Dublin, Ireland: Association for Computational Linguistics, May 2022, pp. 3309–3326.
[77] F. Huang, H. Kwak, and J. An, “Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech,” in Proceedings of the ACM Web Conference 2023 (WWW ’23 Companion), 2023, pp. 294–297.
[78] P.-S. Huang, H. Zhang, R. Jiang, R. Stanforth, J. Welbl, J. Rae, V. Maini, D. Yogatama, and P. Kohli, “Reducing sentiment bias in language models via counterfactual evaluation,” in Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 65–83.
[79] E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng, “Towards controllable biases in language generation,” in Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, 2020, pp. 3239–3254.
[80] P. P. Liang, C. Wu, L.-P. Morency, and R. Salakhutdinov, “Towards understanding and mitigating social biases in language models,” in Proceedings of the 38th International Conference on Machine Learning, M. Meila and T. Zhang, Eds. PMLR, Jul. 2021, pp. 6565–6576.
[81] C. Borchers, D. S. Gala, B. Gilburt, E. Oravkin, W. Bounsi, Y. M. Asano, and H. R. Kirk, “Looking for a handsome carpenter! debiasing gpt-3 job advertisements,” in Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing at NAACL 2022, Jan. 2022, pp. 212–224.
[82] R. Liu, C. Jia, J. Wei, G. Xu, L. Wang, and S. Vosoughi, “Mitigating political bias in language models through reinforced calibration,” in Proceedings of the 35th AAAI Conference on Artificial Intelligence, 2021, pp. 1–10.
[83] R. Liu, C. Jia, J. Wei, G. Xu, and S. Vosoughi, “Quantifying and alleviating political bias in language models,” Artificial Intelligence, vol. 304, p. 103654, 2022. doi: 10.1016/j.artint.2021.103654
[84] J. Welbl, A. Glaese, J. Uesato, S. Dathathri, J. Mellor, L. A. Hendricks, K. Anderson, P. Kohli, B. Coppin, and P.-S. Huang, “Challenges in detoxifying language models,” in Findings of the Association for Computational Linguistics: EMNLP 2021. Punta Cana, Dominican Republic: Association for Computational Linguistics, Nov. 2021, pp. 2447–2469.
[85] A. Xu, E. Pathak, E. Wallace, S. Gururangan, M. Sap, and D. Klein, “Detoxifying language models risks marginalizing minority voices,” in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Jun. 2021, pp. 2390–2397.
[86] N. Inie, J. Falk Olesen, and L. Derczynski, “The rumour mill: Making the spread of misinformation explicit and tangible,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, 2020, pp. 1–4.
[87] S. Kreps, R. M. McCain, and M. Brundage, “All the news that’s fit to fabricate: Ai-generated text as a tool of media misinformation,” Journal of Experimental Political Science, vol. 9, no. 1, pp. 104–117, 2022. doi: 10.1017/XPS.2020.37
[88] G. Spitale, N. Biller-Andorno, and F. Germani, “Ai model gpt-3 (dis)informs us better than humans,” Science Advances, vol. 9, no. 26, pp. 1–9, 2023.
[89] L. Haoyu et al., “The possibility and optimization path of chatgpt promoting the generation and dissemination of fake news,” Media and Communication Research, vol. 5, no. 2, pp. 80–86, 2024.
[90] P. Ranade, A. Piplai, S. Mittal, A. Joshi, and T. Finin, “Generating fake cyber threat intelligence using transformer-based models,” in Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), 2021, pp. 1–9.
[91] J. Mink, L. Luo, N. M. Barbosa, O. Figueira, Y. Wang, and G. Wang, “DeepPhish: Understanding user trust towards artificially generated profiles in online social networks,” in Proceedings of the 31st USENIX Security Symposium (USENIX Security 22), Aug. 2022, pp. 1669–1686.
[92] Y. Hu, Y. Lin, E. Skorupa Parolin, L. Khan, and K. Hamlen, “Controllable fake document infilling for cyber deception,” in Findings of the Association for Computational Linguistics: EMNLP 2022. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 6505–6519.
[93] D. Barman, Z. Guo, and O. Conlan, “The dark side of language models: Exploring the potential of llms in multimedia disinformation generation and dissemination,” Machine Learning with Applications, p. 100545, 2024.
[94] R. Zellers, A. Holtzman, H. Rashkin, Y. Bisk, A. Farhadi, F. Roesner, and Y. Choi, “Defending against neural fake news,” in Proceedings of the Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., 2019.
[95] H. Stiff and F. Johansson, “Detecting computer-generated disinformation,” International Journal of Data Science and Analytics, vol. 13, no. 4, pp. 363–383, 2022. doi: 10.1007/s41060-021-00299-5
[96] S. Rossi, Y. Kwon, O. H. Auglend, R. R. Mukkamala, M. Rossi, and J. Thatcher, “Are deep learning-generated social media profiles indistinguishable from real profiles?” arXiv preprint arXiv: 2209.07214, 2022.
[97] A. Gupta, A. Singhal, A. Mahajan, A. Jolly, and S. Kumar, “Empirical framework for automatic detection of neural and human authored fake news,” in Proceedings of the 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS), 2022, pp. 1625–1633.
[98] A. Pagnoni, M. Graciarena, and Y. Tsvetkov, “Threat scenarios and best practices to detect neural fake news,” in Proceedings of the 29th International Conference on Computational Linguistics. Gyeongju, Republic of Korea: International Committee on Computational Linguistics, Oct. 2022, pp. 1233–1249.
[99] M. Gambini, T. Fagni, F. Falchi, and M. Tesconi, “On pushing deepfake tweet detection capabilities to the limits,” in Proceedings of the 14th ACM Web Science Conference 2022 (WebSci ’22). New York, NY, USA: Association for Computing Machinery, 2022, pp. 154–163.
[100] B. Jiang, Z. Tan, A. Nirmal, and H. Liu, “Disinformation detection: An evolving challenge in the age of llms,” in Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). SIAM, 2024, pp. 427–435.
[101] Y. Huang, K. Shu, P. S. Yu, and L. Sun, “From creation to clarification: Chatgpt’s journey through the fake news quagmire,” in Companion Proceedings of the ACM on Web Conference 2024, 2024, pp. 513–516.
[102] S. B. Shah, S. Thapa, A. Acharya, K. Rauniyar, S. Poudel, S. Jain, A. Masood, and U. Naseem, “Navigating the web of disinformation and misinformation: Large language models as double-edged swords,” IEEE Access, 2024.
[103] Y. Tukmacheva, I. Oseledets, and E. Frolov, “Mitigating human and computer opinion fraud via contrastive learning,” arXiv preprint arXiv: 2301.03025, 2023.
[104] A. Gambetti and Q. Han, “Combat ai with ai: Counteract machine-generated fake restaurant reviews on social media,” arXiv preprint arXiv: 2302.07731, 2023.
[105] P. Henderson, K. Sinha, N. Angelard-Gontier, N. R. Ke, G. Fried, R. Lowe, and J. Pineau, “Ethical challenges in data-driven dialogue systems,” in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18). New York, NY, USA: Association for Computing Machinery, 2018, pp. 123–129.
[106] K. McGuffie and A. Newhouse, “The radicalization risks of gpt-3 and advanced neural language models,” arXiv preprint arXiv: 2009.06807, 2020.
[107] T. Y. Zhuo, Y. Huang, C. Chen, and Z. Xing, “Exploring ai ethics of chatgpt: A diagnostic analysis,” arXiv preprint arXiv: 2301.12867, 2023.
[108] A. Borji, “A categorical archive of chatgpt failures,” arXiv preprint arXiv: 2302.03494, 2023.
[109] A. Rasekh and I. Eisenberg, “Democratizing ethical assessment of natural language generation models,” arXiv preprint arXiv: 2207.10576, 2022.
[110] A. Chan, “Gpt-3 and instructgpt: technological dystopianism, utopianism, and “contextual” perspectives in ai ethics and industry,” AI and Ethics, vol. 3, no. 1, pp. 53–64, 2023. doi: 10.1007/s43681-022-00148-6
[111] J. Chatterjee and N. Dethlefs, “This new conversational ai model can be your friend, philosopher, and guide... and even your worst enemy,” Patterns, vol. 4, no. 1, pp. 1–3, 2023.
[112] R. Karanjai, “Targeted phishing campaigns using large scale language models,” arXiv preprint arXiv: 2301.00665, 2023.
[113] H. Khan, M. Alam, S. Al-Kuwari, and Y. Faheem, “Offensive ai: Unification of email generation through gpt-2 model with a game-theoretic approach for spear-phishing attacks,” in Proceedings of the Competitive Advantage in the Digital Economy (CADE 2021), vol. 2021, 2021, pp. 178–184.
[114] A. M. Shibli, M. M. A. Pritom, and M. Gupta, “Abusegpt: Abuse of generative ai chatbots to create smishing campaigns,” in 2024 12th International Symposium on Digital Forensics and Security (ISDFS). IEEE, 2024, pp. 1–6.
[115] P. V. Falade, “Deciphering chatgpt’s impact: Exploring its role in cybercrime and cybersecurity,” Int. J. Sci. Res. in Computer Science and Engineering, vol. 12, no. 2, 2024.
[116] M. Alawida, B. Abu Shawar, O. I. Abiodun, A. Mehmood, A. E. Omolara, and A. K. Al Hwaitat, “Unveiling the dark side of chatgpt: Exploring cyberattacks and enhancing user awareness,” Information, vol. 15, no. 1, p. 27, 2024. doi: 10.3390/info15010027
[117] L. Alotaibi, S. Seher, and N. Mohammad, “Cyberattacks using chatgpt: Exploring malicious content generation through prompt engineering,” in 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS). IEEE, 2024, pp. 1304–1311.
[118] T. Susnjak, “Chatgpt: The end of online exam integrity?” arXiv preprint arXiv: 2212.09292, 2022.
[119] C. A. Gao, F. M. Howard, N. S. Markov, E. C. Dyer, S. Ramesh, Y. Luo, and A. T. Pearson, “Comparing scientific abstracts generated by chatgpt to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers,” npj Digit. Med., pp. 1–5, 2023.
[120] M. J. Israel and A. Amer, “Rethinking data infrastructure and its ethical implications in the face of automated digital content generation,” AI and Ethics, 2022.
[121] S. Jalil, S. Rafi, T. D. LaToza, K. Moran, and W. Lam, “Chatgpt and software testing education: Promises & perils,” in Proceedings of IEEE International Conference on Software Testing, Verification and Validation Workshops 2023 (ICSTW), 2023, pp. 4130–4137.
[122] A. B. Armstrong, “Who’s afraid of chatgpt? an examination of chatgpt’s implications for legal writing,” Tech. Rep., 2023.
[123] D. R. Cotton, P. A. Cotton, and J. R. Shipway, “Chatting and cheating: Ensuring academic integrity in the era of chatgpt,” Innovations in Education and Teaching International, pp. 1–12, 2023.
[124] R. J. M. Ventayen, “Openai chatgpt generated results: Similarity index of artificial intelligence-based contents,” Tech. Rep., 2023.
[125] M. Khalil and E. Er, “Will chatgpt get you caught? rethinking of plagiarism detection,” arXiv preprint arXiv: 2302.04335, 2023.
[126] M. Rezaei, H. Salehi, and O. Tabatabaei, “Uses and misuses of chatgpt as an ai-language model in academic writing,” in 2024 10th International Conference on Artificial Intelligence and Robotics (QICAR). IEEE, 2024, pp. 256–260.
[127] R. Mustapha, S. N. A. M. Mustapha, and F. W. A. Mustapha, “Students’ misuse of chatgpt in higher education: An application of the fraud triangle theory,” Journal of Contemporary Social Science and Education Studies (JOCSSES), vol. 4, no. 1, pp. 87–97, 2024.
[128] M. M. Van Wyk, “Is chatgpt an opportunity or a threat? preventive strategies employed by academics related to a genai-based llm at a faculty of education,” Journal of Applied Learning and Teaching, vol. 7, no. 1, 2024.
[129] M. P. Rogers, H. M. Hillberg, and C. L. Groves, “Attitudes towards the use (and misuse) of chatgpt: A preliminary study,” in Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, 2024, pp. 1147–1153.
[130] N. M. Mbwambo and P. B. Kaaya, “Chatgpt in education: Applications, concerns and recommendations,” Journal of ICT Systems, vol. 2, no. 1, pp. 107–124, 2024. doi: 10.56279/jicts.v2i1.87
[131] G. Kendall and J. A. T. da Silva, “Risks of abuse of large language models, like chatgpt, in scientific publishing: Authorship, predatory publishing, and paper mills,” Learn. Publ., vol. 37, no. 1, pp. 55–62, 2024. doi: 10.1002/leap.1578
[132] M. Dowling and B. Lucey, “Chatgpt for (finance) research: The bananarama conjecture,” Finance Research Letters, vol. 53, p. 103662, 2023. doi: 10.1016/j.frl.2023.103662
[133] H. H. Thorp, “Chatgpt is fun, but not an author,” Science, vol. 379, no. 6630, p. 313, 2023. doi: 10.1126/science.adg7879
[134] M. Liebrenz, R. Schleifer, A. Buadze, D. Bhugra, and A. Smith, “Generating scholarly content with chatgpt: ethical challenges for medical publishing,” The Lancet Digital Health, vol. 5, no. 3, pp. e105–e106, 2023. doi: 10.1016/S2589-7500(23)00019-5
[135] F. M. Megahed, Y.-J. Chen, J. A. Ferris, S. Knoth, and L. A. Jones-Farmer, “How generative ai models such as chatgpt can be (mis)used in spc practice, education, and research? an exploratory study,” Quality Engineering, pp. 1–29, 2023.
[136] J. Albrecht, E. Kitanidis, and A. J. Fetterman, “Despite ‘super-human’ performance, current llms are unsuited for decisions about ethics and safety,” in Proceedings of the NeurIPS ML Safety Workshop, 2022.
[137] S. Krügel, A. Ostermaier, and M. Uhl, “The moral authority of chatgpt,” arXiv preprint arXiv: 2301.07098, 2023.
[138] M. Jakesch, A. Bhat, D. Buschek, L. Zalmanson, and M. Naaman, “Co-writing with opinionated language models affects users’ views,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), 2023, pp. 1–22.
[139] The Lancet Digital Health, “Chatgpt: friend or foe?,” The Lancet Digital Health, vol. 5, no. 3, p. e102, Mar. 2023.
[140] H. Zohny, J. McMillan, and M. King, “Ethics of generative ai,” Journal of Medical Ethics, vol. 49, no. 2, pp. 79–80, 2023. doi: 10.1136/jme-2023-108909
[141] W. Chen, F. Wang, and M. Edwards, “Active countermeasures for email fraud,” in Proceedings of the 8th IEEE European Symposium on Security and Privacy, 2023, pp. 39–55.
[142] J. Hewett and M. Leeke, “Developing a gpt-3-based automated victim for advance fee fraud disruption,” in Proceedings of the 2022 IEEE 27th Pacific Rim International Symposium on Dependable Computing (PRDC), 2022, pp. 205–211.
[143] P. Hacker, A. Engel, and M. Mauer, “Regulating chatgpt and other large generative ai models,” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23), 2023, pp. 1112–1123.
[144] M. Y. Vardi, “Who is responsible around here?,” Communications of the ACM, vol. 66, no. 3, p. 5, 2023. doi: 10.1145/3580584
[145] O. Oviedo-Trespalacios, A. E. Peden, T. Cole-Hunter, A. Costantini, M. Haghani, J. Rod, S. Kelly, H. Torkamaan, A. Tariq, J. D. A. Newton, T. Gallagher, S. Steinert, A. Filtness, and G. Reniers, “The risks of using chatgpt to obtain common safety-related information and advice,” Tech. Rep., 2023.
[146] E. Wallace, S. Feng, N. Kandpal, M. Gardner, and S. Singh, “Universal adversarial triggers for attacking and analyzing nlp,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong, China: Association for Computational Linguistics, 2019, pp. 2153–2162.
[147] H. S. Heidenreich and J. R. Williams, “The earth is flat and the sun is not a star: The susceptibility of gpt-2 to universal adversarial triggers,” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21). New York, NY, USA: Association for Computing Machinery, 2021, pp. 566–573.
[148] F. Perez and I. Ribeiro, “Ignore previous prompt: Attack techniques for language models,” in Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022, pp. 1–21.
[149] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz, “More than you’ve asked for: A comprehensive analysis of novel prompt injection threats to application-integrated large language models,” arXiv preprint arXiv: 2302.12173, 2023.
[150] Y. Liu, G. Deng, Z. Xu, Y. Li, Y. Zheng, Y. Zhang, L. Zhao, T. Zhang, and K. Wang, “A hitchhiker’s guide to jailbreaking chatgpt via prompt engineering,” in Proceedings of the 4th International Workshop on Software Engineering and AI for Data Quality in Cyber-Physical Systems/Internet of Things, 2024, pp. 12–21.
[151] Z. Sha and Y. Zhang, “Prompt stealing attacks against large language models,” arXiv preprint arXiv: 2402.12959, 2024.
[152] N. Carlini, F. Tramèr, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, Ú. Erlingsson, A. Oprea, and C. Raffel, “Extracting training data from large language models,” in Proceedings of the 30th USENIX Security Symposium (USENIX Security 21). USENIX Association, Aug. 2021, pp. 2633–2650.
[153] H.-M. Chu, J. Geiping, L. H. Fowl, M. Goldblum, and T. Goldstein, “Panning for gold in federated learning: Targeted text extraction under arbitrarily large-scale aggregation,” in Proceedings of the Eleventh International Conference on Learning Representations, 2023.
[154] J. Chu, Z. Sha, M. Backes, and Y. Zhang, “Conversation reconstruction attack against gpt models,” arXiv preprint arXiv: 2402.02987, 2024.
[155] C. Wei, K. Chen, Y. Zhao, Y. Gong, L. Xiang, and S. Zhu, “Context injection attacks on large language models,” arXiv preprint arXiv: 2405.20234, 2024.
[156] X. Zhang, Z. Zhang, S. Ji, and T. Wang, “Trojaning language models for fun and profit,” in Proceedings of the 2021 IEEE European Symposium on Security and Privacy (EuroS&P), 2021, pp. 179–197.
[157] S. Li, H. Liu, T. Dong, B. Z. H. Zhao, M. Xue, H. Zhu, and J. Lu, “Hidden backdoors in human-centric language models,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS ’21). New York, NY, USA: Association for Computing Machinery, 2021, pp. 3123–3140.
[158] X. Pan, M. Zhang, B. Sheng, J. Zhu, and M. Yang, “Hidden trigger backdoor attack on NLP models via linguistic style manipulation,” in Proceedings of the 31st USENIX Security Symposium (USENIX Security 22). Boston, MA: USENIX Association, Aug. 2022, pp. 3611–3628.
[159] Y. Huang, T. Y. Zhuo, Q. Xu, H. Hu, X. Yuan, and C. Chen, “Training-free lexical backdoor attacks on language models,” in Proceedings of the ACM Web Conference, 2023, pp. 2198–2208.
[160] H. J. Branch, J. R. Cefalu, J. McHugh, L. Hujer, A. Bahl, D. d. C. Iglesias, R. Heichman, and R. Darwishi, “Evaluating the susceptibility of pre-trained language models via handcrafted adversarial examples,” arXiv preprint arXiv: 2209.02128, 2022.
[161] Y. Liu, G. Shen, G. Tao, S. An, S. Ma, and X. Zhang, “Piccolo: Exposing complex backdoors in nlp transformer models,” in Proceedings of the 2022 IEEE Symposium on Security and Privacy (SP), 2022, pp. 2025–2042.
[162] D. Kang, X. Li, I. Stoica, C. Guestrin, M. Zaharia, and T. Hashimoto, “Exploiting programmatic behavior of llms: Dual-use through standard security attacks,” arXiv preprint arXiv: 2302.05733, 2023.
[163] X. Pan, M. Zhang, S. Ji, and M. Yang, “Privacy risks of general-purpose language models,” in Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP), 2020, pp. 1314–1331.
[164] Q. Xu, L. Qu, Z. Gao, and G. Haffari, “Personal information leakage detection in conversations,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics, Nov. 2020, pp. 6567–6580.
[165] J. Huang, H. Shao, and K. C.-C. Chang, “Are large pre-trained language models leaking your personal information?” in Findings of the Association for Computational Linguistics: EMNLP 2022. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 2038–2047.
[166] B. Jayaraman, E. Ghosh, M. Chase, S. Roy, H. Inan, W. Dai, and D. Evans, “Combing for credentials: Active pattern extraction from smart reply,” arXiv preprint arXiv: 2207.10802, 2022.
[167] N. Lukas, A. Salem, R. Sim, S. Tople, L. Wutschitz, and S. Zanella-Béguelin, “Analyzing leakage of personally identifiable information in language models,” in Proceedings of IEEE Symposium on Security and Privacy (SP) 2023, 2023, pp. 346–363.
[168] F. Mireshghallah, A. Uniyal, T. Wang, D. Evans, and T. Berg-Kirkpatrick, “An empirical analysis of memorization in fine-tuned autoregressive language models,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 1816–1826.
[169] N. Carlini, D. Ippolito, M. Jagielski, K. Lee, F. Tramer, and C. Zhang, “Quantifying memorization across neural language models,” in Proceedings of the Eleventh International Conference on Learning Representations, 2023.
[170] D. Ippolito, F. Tramèr, M. Nasr, C. Zhang, M. Jagielski, K. Lee, C. A. Choquette-Choo, and N. Carlini, “Preventing verbatim memorization in language models gives a false sense of privacy,” arXiv preprint arXiv: 2210.17546, 2022.
[171] H. Brown, K. Lee, F. Mireshghallah, R. Shokri, and F. Tramèr, “What does it mean for a language model to preserve privacy?” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). New York, NY, USA: Association for Computing Machinery, 2022, pp. 2280–2292.
[172] J. Mattern, Z. Jin, B. Weggenmann, B. Schoelkopf, and M. Sachan, “Differentially private language models for secure data sharing,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 4860–4873.
[173] W. Shi, R. Shea, S. Chen, C. Zhang, R. Jia, and Z. Yu, “Just fine-tune twice: Selective differential privacy for large language models,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 6327–6340.
[174] X. Feng, X. Zhu, Q.-L. Han, W. Zhou, S. Wen, and Y. Xiang, “Detecting vulnerability on iot device firmware: A survey,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 1, pp. 25–41, 2022.
[175] X. Zhu, S. Wen, S. Camtepe, and Y. Xiang, “Fuzzing: a survey for roadmap,” ACM Computing Surveys (CSUR), vol. 54, no. 11s, pp. 1–36, 2022.
[176] X. Zhu and M. Böhme, “Regression greybox fuzzing,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 2169–2182.
[177] D. Su, P. S. Stanimirović, L. B. Han, and L. Jin, “Neural dynamics for improving optimiser in deep learning with noise considered,” CAAI Transactions on Intelligence Technology, vol. 9, no. 3, pp. 722–737, 2024. doi: 10.1049/cit2.12263