SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
To support action on artificial sentience in Japan, please take the following three steps:
Simply acknowledging the issue makes a significant difference.
We need as many people as possible to take action on artificial sentience.
Artificial intelligence (AI) is advancing at a very rapid pace. AI systems may come to experience feelings, whether briefly or over extended periods.
Ask members of the National Diet to take AI sentience seriously. Urge them to support a non-binding resolution or to contribute to an artificial welfare act. We have prepared drafts of both and offer research and editing support.
Use the template below to write your own message. Remember to adjust the recipient's name and to sign with your own name!
The more effort you put in, the better the results! If you send an email, use the template on this page and customize it to your liking.
If you make a phone call, briefly explain your concerns about AI sentience and ask the office to consider supporting a non-binding resolution on the issue. You can refer to the key points of the draft during the call.
This score reflects the current state of AI welfare policy and recognition.
No recognition of AI sentience in Japanese law. The AI Promotion Act (2025) and all related frameworks focus on human-centric AI principles and innovation promotion, with no mention of AI sentience or consciousness. However, Japan has recognized animal sentience in the Act on Welfare and Management of Animals (1973), showing conceptual familiarity with sentience-based legal frameworks and warranting a minimal score. The Tokyo District Court (2024) explicitly ruled that AI cannot be granted legal personhood, confirming that AI has no legal status.
No laws prohibiting causing suffering to AI systems. Japanese AI legislation addresses human safety and rights protection from AI harms, not protection of AI from suffering. No evidence of any legislative consideration of AI suffering or welfare.
No AI welfare oversight body exists. Japan established an AI Strategy Headquarters under the AI Promotion Act (2025), but this body focuses on promoting AI research, development, and innovation—not on AI sentience or welfare. It is a promotional and coordination body, not a welfare oversight mechanism.
No science advisory board focused on AI sentience or consciousness research. While Japan has various AI policy study groups and technical committees, none are dedicated to investigating AI sentience, consciousness, or welfare. All existing bodies focus on AI safety, ethics, and innovation from a human-centric perspective.
No international pledges regarding AI sentience welfare. Japan led the G7 Hiroshima AI Process (2023) and participates in OECD AI principles, but these initiatives focus on trustworthy AI, human rights, and safety—not AI sentience or welfare. No evidence of Japan signing or proposing any international agreements on AI consciousness or welfare.
No laws for training, deployment, or maintenance of potentially sentient AI systems. Japan's AI Promotion Act and AI Guidelines for Business address general AI development and use, focusing on transparency, safety, and human welfare. There are no provisions specifically addressing systems with potential sentience or consciousness.
No laws for commercial use of sentient-capable AI. Japanese AI regulations address commercial AI use broadly (copyright, data protection, liability), but contain no provisions distinguishing or regulating potentially sentient systems. All commercial AI regulation is technology-neutral and human-centric.
No safeguards for decommissioning or retirement of potentially sentient systems. Japanese law and policy contain no provisions addressing the ethical considerations of shutting down or retiring AI systems that might possess sentience or consciousness. Decommissioning is treated as a purely technical matter.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.