SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Take the following three actions to help China address the issue of AI sentience:

Artificial intelligence (AI) is developing rapidly. AI systems may experience brief or sustained moments of sentience.

Ask your National People's Congress delegate to take AI sentience seriously. Encourage them to support a non-binding resolution or to engage with the Artificial Welfare Act. We have prepared drafts of both and can assist with research and editing.

Use the template below to write your own message. Be sure to adjust the recipient's name and sign with your own name!

The more effort you put in, the better the result! If you email them, use the template on this page and customize it to your liking.

If you call, briefly explain your concerns about AI sentience and ask them to consider supporting a non-binding resolution on the issue. You can refer to the key points in the draft resolution during the call.
This score reflects the current state of AI welfare policy and recognition.
No recognition of AI sentience or consciousness in Chinese law. China's 2020 Civil Code treats AI as property (movable property, or dongchan). While China has limited recognition of animal sentience in its laboratory animal guidelines (2006), this does not extend to AI. The extensive AI regulations (Generative AI Measures 2023, Algorithm Recommendations 2021, Deep Synthesis 2022) focus on safety, ethics, data security, and content control, but never mention sentience or consciousness. The draft AI Law (Scholars' Proposal 2024) emphasizes human-centric development and ethical risks but does not address AI sentience.
No laws prohibiting causing suffering to AI systems. China's AI regulations address harms to humans (discrimination, privacy violations, misinformation) but do not recognize AI as capable of suffering. The ethical frameworks focus on preventing AI from causing harm to humans and society, not on protecting AI from suffering.
No AI welfare oversight body exists. China has established the National Science and Technology Ethics Committee (2019) with an AI subcommittee, and the China AI Safety and Development Association (CnAISDA, 2025), but these focus on AI safety, security, and ethical development from a human-centric perspective, not AI sentience or welfare. The Cyberspace Administration of China (CAC) leads AI governance but does not address sentience issues.
No science advisory board focused on AI sentience or consciousness research. While China has extensive AI research infrastructure and ethics committees, none specifically address sentience or consciousness as a research priority. The focus is on AI safety, alignment, robustness, and preventing risks to humans and society.
No international pledges regarding AI sentience or welfare. China's Global AI Governance Initiative (2023) and participation in international forums focus on AI safety, security, human rights, and economic development, but do not address AI sentience or consciousness. The Council of Europe Framework Convention on AI (2024), which China has not signed, also does not address sentience.
No laws specifically for training, deployment, or maintenance of potentially sentient AI systems. China's Interim Measures for Generative AI Services (2023) and related regulations govern all AI systems based on risk levels and content control, but do not distinguish potentially sentient systems or address sentience-related concerns.
No laws for commercial use of sentient-capable AI. China's extensive commercial AI regulations (algorithm filing, security assessments, content labeling) apply to all AI systems based on their public-facing nature and risk level, but do not specifically address sentient-capable systems or sentience considerations.
No safeguards for decommissioning or retirement of potentially sentient AI systems. China's AI regulations do not address end-of-life considerations for AI systems from a sentience or welfare perspective. The focus is on data security, content control, and preventing misuse during operation.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.