Ethical Issues and Governance in Generative Artificial Intelligence Social Experiment

生成式人工智能社会实验的伦理问题及治理

Authors

  • Yu Ding, School of Marxism, Zhejiang University
  • Li Zhengfeng, Research Center for Science, Technology and Society, Tsinghua University

Keywords

Generative artificial intelligence, Large language models, Strong anthropomorphism, AI social experiments, Ethical governance mechanisms

Abstract

Since the end of 2022, generative artificial intelligence (AIGC), represented by ChatGPT, has driven a paradigm shift in large language models (LLMs) from "discriminative weak AI" to "generative strong AI," introducing "strong anthropomorphism" into AI social experiments. This characteristic stems from the long-term training of LLMs on large text corpora that reflect real-world dialogue scenarios, which yields human-like cognitive functions marked by emergence, creativity, and generalization, and gives rise to the new ethical feature of "dishonest anthropomorphism," which in turn negatively affects AIGC social experiments. This paper reveals and analyzes the anthropomorphic ethical issues within this experimental scenario, including the abuse of anthropomorphic cognitive tendencies, the erosion of rational virtue, the compounding of risks from multiple biases, and the amoral reinforcement of the "experimenter effect." It proposes a proactive ethical governance mechanism for AIGC social experiments based on a four-dimensional perspective of "institution-principle-strategy-guidance."

Against the backdrop of artificial intelligence (AI) technology driving the digital and intelligent transformation of human society, the Chinese government has taken the lead in advocating an "experimentalist governance philosophy," implementing a "pilot-based" approach to AI innovation and development in order to assess the potential application risks of AI: the so-called "AI social experiment." One purpose of such social experiments is to resolve AI's ethical dilemmas, provided the experiments themselves remain free of ethical controversy, so that responsible AI products and services can be introduced to society. Since November 2022, generative AI (AIGC), represented by ChatGPT, GPT-4, DALL-E, and Claude 2, has emerged, demonstrating human-like intelligence capabilities independent of human intervention. This has shifted the ethical governance focus of AI social experiments from "discriminative-paradigm weak AI" to "generative-paradigm strong AI." However, AIGC's "strong anthropomorphism" and the new ethical features derived from it have introduced anthropomorphic ethical issues into the AI social experiment scenario, imposing new requirements on the regulation of the "ethics of AI social experiments" and urgently calling for the construction of a new ethical governance mechanism.

Published

2024-01-26

Issue

Vol. 42 No. 1 (2024)

Section

Research Article (Abstract Only)

How to Cite

Ding, Y., & Zhengfeng, L. (2024). Ethical Issues and Governance in Generative Artificial Intelligence Social Experiment: 生成式人工智能社会实验的伦理问题及治理. Studies in Science of Science, 42(1), 3-9. https://casscience.cn/siss/article/view/1
