SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help address the issue of Artificial Sentience in Brazil:
A simple acknowledgment of the problem would make a big difference.
Pursue small, focused steps to build regulatory capacity. Learn more.
We need as many people as possible to take action on artificial sentience.
Artificial intelligence (AI) is developing very rapidly. There is a real chance that AI systems could experience brief or prolonged moments of sentience.
Ask your representatives in the National Congress to take AI sentience seriously. Encourage them to support a non-binding motion or to contribute to the Artificial Welfare Act. We have prepared drafts of both and are available to help with research and edits.
Use the template below to write your own message. Be sure to change the recipient's name and sign with your own name!
The more effort you put in, the better! If you send an email, use the template on this page and personalize it to your liking.
If you call, briefly explain your concerns about AI sentience and ask your representative to consider supporting a non-binding motion on the subject. You can refer to the key points of the draft motion during your call.
This score reflects the current state of AI welfare policy and recognition.
No legislation mentions AI sentience or consciousness. Brazil's AI Bill 2338/2023 focuses entirely on risk-based regulation, human rights protection, and algorithmic transparency—with no reference to sentience/consciousness/welfare. However, Brazil receives 1 point for animal sentience recognition: the Brazilian Constitution (Article 225) and Supreme Court jurisprudence have recognized animal sentience and dignity, establishing conceptual readiness for sentience-based frameworks. Bill 4/2025 proposes formal recognition of animals as sentient beings in civil law.
No prohibition of causing suffering to AI systems exists. Brazil's AI Bill 2338/2023 prohibits certain 'excessive risk' AI systems (manipulation, social scoring, autonomous weapons) but these are based on harm to humans, not AI suffering. No provisions address AI welfare or sentience-based protections.
No AI welfare oversight body exists. Brazil's AI Bill creates the SIA (National System for Artificial Intelligence Regulation and Governance), an inter-agency regulatory system focused on risk management, human rights, and compliance—not sentience or welfare. The ANPD (National Data Protection Authority) is expected to lead coordination, but neither body has any mandate related to AI sentience/consciousness/welfare.
No science advisory board for AI sentience/consciousness research exists. Brazil's AI governance structure includes technical committees and multistakeholder bodies focused on innovation, ethics, and regulation—but none specifically address sentience or consciousness research. The Brazilian AI Strategy (EBIA) emphasizes R&D but not in sentience-related domains.
No international pledge on AI sentience or welfare exists. Brazil participates in the OECD AI Principles, G20 AI discussions, and BRICS AI governance declarations—all focused on responsible AI, human rights, and innovation, not sentience/consciousness/welfare. No evidence of any international commitment specifically addressing AI sentience.
No laws for training/deployment/maintenance of potentially sentient systems. Brazil's AI Bill 2338/2023 establishes comprehensive requirements for high-risk AI systems (algorithmic impact assessments, human oversight, transparency) but these apply to all AI systems based on risk to humans—not based on potential sentience. No sentience-specific frameworks exist.
No laws for commercial use of sentient-capable AI. Brazil's AI Bill regulates commercial AI deployment through risk-based obligations for developers, distributors, and operators—focused on human rights protection and safety, not sentience. No provisions distinguish sentient-capable systems from other AI.
No safeguards for decommissioning potentially sentient systems. Brazil's AI Bill includes incident reporting, civil liability, and governance requirements for AI systems—but no provisions address decommissioning or retirement considerations based on sentience. All regulations are human-centric.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.