Opportunities and Challenges in AI Painting: The Game between Artificial Intelligence and Humanity
DOI: https://doi.org/10.62517/jbdc.202401106
Author(s)
Jiawen Li, Junhao Zhong, Songyuan Liu, Xiaoming Fan*
Affiliation(s)
Beijing Police College, Beijing, China *Corresponding Author.
Abstract
This paper analyzes the rise of artificial intelligence (AI) painting technology and its profound impact on the traditional art field. It first describes the technical foundation of AI painting, namely how deep learning models transform text descriptions into visual images. It then focuses on Stable Diffusion, an open-source tool that has attracted wide attention in the field of AI painting, and discusses its application potential in artistic creation as well as the copyright and ethical issues it may raise. The paper further examines the challenges AI painting poses to artistic creation, including the protection of originality, the preservation of artistic value, and ethical considerations. Finally, it puts forward a series of governance suggestions aimed at balancing the innovative potential of AI painting with the traditional values of the art world, and emphasizes the possibility of collaboration between human artists and AI in artistic creation, showing how such collaboration can jointly advance the future development of the art field.
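To make the text-to-image process summarized above concrete, the following is a minimal sketch built on the open-source Hugging Face diffusers library; the checkpoint name, prompt, and sampling parameters are illustrative assumptions rather than details taken from the paper.

    # Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
    # Checkpoint, prompt, and parameters below are illustrative assumptions.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly released Stable Diffusion checkpoint (assumed here;
    # any compatible checkpoint is loaded the same way).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # fall back to "cpu" (with float32) if no GPU is available

    # The text prompt is encoded by the model's text encoder and guides the
    # iterative denoising process that produces the final image.
    prompt = "an oil painting of a misty mountain village at dawn"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("ai_painting_example.png")

The guidance_scale parameter controls how strongly the denoising process follows the text description, which is the basic mechanism by which such systems turn language into pictures.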
Keywords
AI Painting; Stable Diffusion; Artificial Intelligence; Artistic Creation; Human-AI Collaboration