The Impact of Large Language Models on Public Security Intelligence Work and Countermeasures Research
DOI: https://doi.org/10.62517/jbdc.202401113
Author(s)
Ruiyan Zhao*, Tuo Shi
Affiliation(s)
Department of Public Security Management, Beijing Police College, Beijing, China
*Corresponding Author.
Abstract
In advancing the integrated construction of "information, indicators, and operations," Chinese police departments face the real challenge of accelerating the development of new productive forces in public security. With the rapid, data- and algorithm-driven development of Large Language Models and AIGC, public security intelligence work is under constant pressure to absorb these evolving technologies, internalizing the capabilities of Large Language Model tools as a core driving force for improving its institutional systems and operational capabilities. Under the influence of Large Language Model technology, public security intelligence work faces risks such as data security breaches, algorithmic "black boxes," and a lack of legal regulation. To mitigate these risks, this paper, building on an in-depth analysis of the technical logic of Large Language Models and an examination of their impact on public security intelligence work and the attendant application risks, focuses on governance strategies. It proposes a governance system led by public security organs with the cooperation of diverse other entities, together with a "human-machine collaboration" operating mode, so that Large Language Models can better adapt to the application scenarios of public security intelligence work.
Keywords
Large Language Models; Public Security Intelligence Work; Artificial Intelligence; Human-machine Collaboration