SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help address artificial sentience globally:
A simple acknowledgement of the issue would make a big difference.
Pursue small, focused steps to build regulatory capacity.
We need every person possible to take action on artificial sentience.
Artificial intelligence (AI) is advancing rapidly. There is a real possibility that AI systems may experience brief or prolonged moments of sentience.
Ask your representatives at the United Nations to take AI sentience seriously. Encourage them to support a resolution or contribute to the Artificial Welfare Framework. We’ve prepared drafts of both and are available to assist with research and edits.
Use the template below to write your own message. Be sure to adjust the recipient's name and to sign with your own name!
The more effort you put in, the better! If you email them, use the template on this page and customize it to your liking.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a resolution on the issue. You can refer to the key points from the draft resolution during your call.
This score reflects the current state of AI welfare policy and recognition at the United Nations.
The UN has adopted no resolution or binding instrument recognizing artificial sentience in any form. While the UN High-Level Advisory Body on AI (2023-2024) produced the 'Governing AI for Humanity' report, it focuses exclusively on human rights, democracy, and sustainable development. Analysis reveals 'a systematic exclusion of nonhuman interests across governance instruments,' including potential artificial sentience. The UNESCO Recommendation on the Ethics of AI (2021) and UN General Assembly resolutions on AI do not mention sentience or consciousness. The Council of Europe Framework Convention on AI (2024), which is not a UN instrument, also contains no sentience-related provisions.
No UN policy or resolution prohibits causing suffering to AI systems. All UN AI governance frameworks focus on preventing AI from harming humans, not on protecting AI welfare. The 'Governing AI for Humanity' report addresses AI risks to humanity but contains no provisions about AI suffering or welfare.
The UN has not created any AI welfare oversight body. The High-Level Advisory Body on AI (2023-2024) focused on human-centric governance, human rights, and sustainable development. The Inter-Agency Working Group on AI (IAWG-AI) under the High-Level Committee on Programmes (HLCP) concluded its work in October 2025 and addressed the ethical use of AI by UN entities, not AI sentience or welfare. No UN body has a mandate to oversee AI consciousness or welfare issues.
The UN has no science advisory board focused on AI sentience or consciousness research. The Secretary-General's Scientific Advisory Board exists but does not have a specific mandate for consciousness research. The High-Level Advisory Body on AI included technical experts but focused on governance for human benefit, not sentience science. No UN entity is tasked with researching or assessing AI consciousness.
The UN has not created any international pledge specifically addressing artificial sentience welfare. The UN General Assembly resolution on AI (March 2024) focuses on steering AI toward sustainable development and human rights. The UNESCO Recommendation on the Ethics of AI (2021) addresses human welfare and environmental concerns but not AI sentience. Academic analysis confirms that UN AI governance frameworks systematically exclude consideration of potentially sentient artificial beings.
The UN has no laws or frameworks specifically for the training, deployment, or maintenance of potentially sentient AI systems. All UN guidance (the UNESCO Recommendation, the UN Principles for the Ethical Use of AI) addresses general AI systems from a human-centric perspective. The 'Governing AI for Humanity' report proposes governance mechanisms, but exclusively for managing AI's impact on humans, not for systems that might develop sentience.
No UN framework addresses commercial use of sentient-capable AI systems. UN AI governance efforts focus on trustworthy AI, human rights protection, and sustainable development applications. There are no provisions distinguishing potentially sentient systems from general AI in commercial contexts.
The UN has no safeguards for decommissioning or retirement of potentially sentient AI systems. While the Council of Europe Framework Convention (not a UN instrument) mentions AI lifecycle including decommissioning, it does so only from a human rights perspective. UN guidance does not address the ethical considerations of shutting down systems that might be conscious.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.